David Kirkpatrick

June 17, 2009

The latest cybersecurity news

This release is from today and covers the most up-to-date cybersecurity work done for national defense. Given the information society and interconnectedness of today’s world, cybersecurity is a very real matter of national defense. At the same time it’s an area fraught with privacy and other concerns.

The release:

NIST, DOD, intelligence agencies join forces to secure US cyber infrastructure

The National Institute of Standards and Technology (NIST), in partnership with the Department of Defense (DOD), the Intelligence Community (IC), and the Committee on National Security Systems (CNSS), has released the first installment of a three-year effort to build a unified information security framework for the entire federal government. Historically, information systems at civilian agencies have operated under different security controls than military and intelligence information systems. This installment is titled NIST Special Publication 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations.

“The common security control catalog is a critical step that effectively marshals our resources,” says Ron Ross, NIST project leader for the joint task force. “It also focuses our security initiatives to operate effectively in the face of changing threats and vulnerabilities. The unified framework standardizes the information security process that will also produce significant cost savings through standardized risk management policies, procedures, technologies, tools and techniques.”

This publication is a revised version of the security control catalog that was previously published in response to the Federal Information Security Management Act (FISMA) of 2002. This special publication contains the catalog of security controls and technical guidelines that federal agencies use to protect their information and technology infrastructure.

When complete, the unified framework will result in the defense, intelligence and civil communities using a common strategy to protect critical federal information systems and associated infrastructure. This ongoing effort is consistent with President Obama’s call for “integrating all cybersecurity policies for the government” in his May 29 speech on securing the U.S. cybersecurity infrastructure.

The revised security control catalog in SP 800-53 provides the most state-of-the-practice set of safeguards and countermeasures for information systems ever developed. The updated security controls—many addressing advanced cyber threats—were developed by a joint task force that included NIST, DOD, the IC and the CNSS with specific information from databases of known cyber attacks and threat information.

Additional updates to key NIST publications that will serve the entire federal government are under way. These will include the newly revised SP 800-37, which will transform the current certification and accreditation process into a near real-time risk management process that focuses on monitoring the security state of federal information systems, and SP 800-39, which is an enterprise-wide risk management guideline that will expand the risk management process.

 ###

 NIST Special Publication 800-53, Revision 3, is open for public comment through July 1, 2009. The document is available online at http://csrc.nist.gov/publications/PubsDrafts.html#800-53_Rev3. Comments should be sent to sec-cert@nist.gov.

Protests continue in Iran

Filed under: Politics — David Kirkpatrick @ 2:39 pm

From all accounts the ongoing election protests in Iran are relatively peaceful. I’ve read some accounts that make the situation out to be a game of reverse chicken where the first side to go openly violent will end up the loser. At this point I think it’s pretty clear the previous status quo has lost. Regardless of the outcome, the legitimacy of the post-1979 government is either significantly reduced or possibly gone altogether.

From the link:

The protesters marched silently down a major thoroughfare, some holding photographs of the main opposition candidate in Friday’s vote, Mir Hussein Moussavi. Others lifted their bare hands high in the air, signifying their support for Mr. Moussavi with green ribbons tied around their wrists or holding their fingers in a victory sign.

The scope and description of the demonstration was provided by participants who were reached by telephone, as well as photographs taken by participants and journalists despite warnings by the authorities against reporting on the event. All journalists accredited in Iran have been ordered to remain in their offices.

It was the fifth day of unrest since election officials declared a landslide victory for the incumbent, President Mahmoud Ahmadinejad.

This bit from the same link strikes me as patently ridiculous:

The Iranian Foreign Ministry, meanwhile, summoned the Swiss ambassador, who represents American interests in Tehran, to complain of “interventionist” statements by American officials, state-run media reported.

If anything the White House is playing this very smoothly and not providing any fuel for “Great Satan influence” rhetoric from the Iranian government.

Of course some on the neocon right don’t see things quite like anyone else.

To wit (from a Robert Kagan op-ed):

It’s not that Obama preferred a victory by Mahmoud Ahmadinejad. He probably would have been happy to do business with Mir Hossein Mousavi, even if there was little reason to believe Mousavi would have pursued a different approach to the nuclear issue. But once Mousavi lost, however fairly or unfairly, Obama objectively had no use for him or his followers. If Obama appears to lend support to the Iranian opposition in any way, he will appear hostile to the regime, which is precisely what he hoped to avoid.

Obama’s policy now requires getting past the election controversies quickly so that he can soon begin negotiations with the reelected Ahmadinejad government.

And with this line of fantasy the neocons fade a little deeper back into history ready to be mothballed in think tanks for another 35 or so years.

Kagan’s outrageous op-ed was immediately countered by the blogosphere.

Here’s Matt Duss:

But I have to say, Mr. Kagan, your op-ed this morning is really beneath you. You can’t actually believe that President Obama is “siding with the Iranian regime” against the Iranian people, or that Obama’s outreach to Iran depends upon keeping hardliners in power, can you? You’re far too intelligent to buy the brutishly simplistic “realism” that you attempt to hang upon President Obama’s approach. These sorts of claims are better left to your friend and occasional co-author Bill Kristol, who uses his series of valuable journalistic perches (with which he inexplicably continues to be gifted) to launch an endless stream of comically transparent bad faith arguments. You’re better than that. You’re the smart neocon.

I wish the best of luck to the people of Iran, people who deserve the modern society denied them for many years. I’m disappointed, but not surprised, that the neocon, pro-Israel right would attempt to inject U.S. politics into a situation that belongs to one Middle East nation, and one nation alone, at this time.

June 16, 2009

NSA and domestic surveillance

This New York Times report on the National Security Agency and ongoing domestic spying is troubling. One of the largest problems with police state apparatus is how pernicious it becomes. Once in place it’s very, very difficult to root out. Every freedom lost is a freedom you can’t expect to get back.

From the link:

Since April, when it was disclosed that the intercepts of some private communications of Americans went beyond legal limits in late 2008 and early 2009, several Congressional committees have been investigating. Those inquiries have led to concerns in Congress about the agency’s ability to collect and read domestic e-mail messages of Americans on a widespread basis, officials said. Supporting that conclusion is the account of a former N.S.A. analyst who, in a series of interviews, described being trained in 2005 for a program in which the agency routinely examined large volumes of Americans’ e-mail messages without court warrants. Two intelligence officials confirmed that the program was still in operation.

Both the former analyst’s account and the rising concern among some members of Congress about the N.S.A.’s recent operation are raising fresh questions about the spy agency.

Representative Rush Holt, Democrat of New Jersey and chairman of the House Select Intelligence Oversight Panel, has been investigating the incidents and said he had become increasingly troubled by the agency’s handling of domestic communications.

In an interview, Mr. Holt disputed assertions by Justice Department and national security officials that the overcollection was inadvertent.

“Some actions are so flagrant that they can’t be accidental,” Mr. Holt said.

The stimulus plan, COBRA and business

I’ve done recent blogging on COBRA and the stimulus plan, but the topic is still fairly confusing in terms of how the unemployed obtain the subsidy and how this program ties into existing ex-employer-based COBRA health insurance.

The link in this graf doesn’t make things perfectly clear, but it does offer some interesting ideas on the corporate side in maximizing benefits for both the company and the recently laid-off worker.

From the link:

However, to the extent that an employer subsidizes all or a portion of COBRA benefits following a set of employee layoffs, the employer subsidy period also reduces the length of the federal subsidy period.

For example, assume an employer subsidizes COBRA coverage for three months following a layoff.

Under this scenario, the employee would only be eligible for six months of the federal COBRA subsidy rather than the full time allotted.

For the above reasons, I recommend that employers provide employees with additional severance benefits and eliminate their corporate subsidy for COBRA coverage.

This saves corporate resources, and employees may take maximum advantage of the federal COBRA subsidy.

As an alternative to eliminating their corporate COBRA subsidy, employers may elect to measure COBRA from the “loss of coverage” rather than the actual qualifying event.
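
To make the arithmetic in that example concrete, here’s a quick back-of-the-envelope sketch. It assumes the federal subsidy period is the nine months provided under the stimulus plan and that employer-subsidized months count against that window, which is how I read the excerpt above. Check with a benefits professional before relying on any of this.

```python
# Rough sketch of the COBRA subsidy arithmetic from the quoted example.
# Assumption: the federal premium subsidy runs 9 months, and any months an
# employer subsidizes COBRA reduce that window, per the excerpt above.

FEDERAL_SUBSIDY_MONTHS = 9  # assumed ARRA subsidy period

def remaining_federal_subsidy(employer_subsidized_months: int) -> int:
    """Months of federal COBRA subsidy left after an employer-paid period."""
    return max(FEDERAL_SUBSIDY_MONTHS - employer_subsidized_months, 0)

# The example from the link: a 3-month employer subsidy leaves 6 federal months.
print(remaining_federal_subsidy(3))  # -> 6
# No employer subsidy (the article's recommendation): the full 9 months remain.
print(remaining_federal_subsidy(0))  # -> 9
```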

June 15, 2009

This revolution is not televised

Filed under: Media, Politics, Technology — David Kirkpatrick @ 5:51 pm

I’m going to assume the televised media will eventually pick up the ball on the ongoing situation in Iran. It’s only the most important geopolitical story out there. Thirty years after deposing the Shah, Iranians are rejecting both a sham election and the corrupt Islamic leadership.

Of course if you want any serious coverage of the Iranian green revolution you need to hit the BBC, the blogosphere, NYT’s website or Twitter. For the most part mainstream media is proving its irrelevancy once again. The Sunday edition of my local paper had exactly zero mention of Iran on its front page. Sadly I can’t type “unbelievable” because utter crap has become par for the course.

Hit the link for a Twitter #iranelection hashtag search.

June 14, 2009

Tracking the coup in Iran

Filed under: Media, Politics, Technology — David Kirkpatrick @ 6:47 pm

The Daily Dish has been indispensable along with many, many other online resources. Twitter has apparently been indispensable among services in Iran.

As an app Twitter is still an infant battling growing pains, hype and speculation about monetization. What is amazing is how those 140 characters affected the San Diego wildfires and now an ongoing international situation where mainstream media is repeatedly dropping the ball. Web 2.0 is proving to be much more revolutionary than anyone could have guessed.

Hit the link for a Twitter search on the hashtag #iranelection.

June 13, 2009

Civil war in Iran?

Filed under: Politics — David Kirkpatrick @ 2:16 pm

Looks to be very likely given the stolen election and the enthusiastic level of voting and support for Mousavi.

Andrew Sullivan has done a great job of covering the election and its aftermath including many insights from his myriad of readers.

Looks like even Iran’s monitors are calling the results election fraud.

From the link:

A Farsi speaking military reader confirms the post here, perhaps the most important aspect of which was that Iran’s own election monitors have allegedly declared the election a fraud.

June 12, 2009

Dell earns $3M from Twitter account

Filed under: Business, Technology — David Kirkpatrick @ 3:29 pm

At least it claims as much. It’ll be interesting to see how many large companies announce ROI from Twitter — and really, what is the “investment”? the salary of the employee creating and responding to tweets? — and how many companies with little or no brand identity fare with aggressive social networking.

From the link:

Dell Computers announced last night that it has surpassed $3 million in sales via links from one of its Twitter accounts, making one of the most high profile examples of social media Return on Investment (ROI) all the more juicy.

Telling your reluctant boss that social media is worth using because Dell made $3 million on Twitter, however, runs the risk of encouraging e-commerce broadcast as the model for engagement in conversation. Other, more conversational, examples of ROI make important additions to conversations about Dell and social media. (They also concern a lot more money.)

(Hat tip: @Rex7 RT @prebynski)
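
Since I asked what the “investment” even is, here’s a purely hypothetical back-of-the-envelope version of the math. Only the $3 million figure comes from the article; the margin and staff-cost numbers below are invented for illustration, so this says nothing about Dell’s actual returns.

```python
# Back-of-the-envelope ROI sketch for the Dell/Twitter figure quoted above.
# The $3 million in sales is from the article; everything else is hypothetical.

twitter_linked_sales = 3_000_000  # reported sales via links from the Twitter account
gross_margin = 0.15               # assumed margin; ROI should use profit, not revenue
staff_cost = 60_000               # hypothetical fully loaded cost of the tweeting staff

profit = twitter_linked_sales * gross_margin
roi = (profit - staff_cost) / staff_cost
print(f"ROI under these made-up assumptions: {roi:.1f}x")  # -> 6.5x
```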

Web 2.0 and security

Filed under: Business, Media, Technology — David Kirkpatrick @ 3:16 pm

Here’s a group of four good security points from CIO.com to keep in mind when engaging in web 2.0/web 3.0/social networking.

Number four from the list:

4) Sadly, You Really Can’t Trust Your Friends or Your Social Network
As a tweet from the Websense Security Labs recently stated, “Web threats delivered via your personal Web 2.0 social network is the new black — do not automatically trust suspicious messages from friends.” The social networking explosion has created new ways of delivering threats. Web users are so accustomed to receiving tweets with shortened URLs, video links posted to their Facebook pages and email messages purportedly from the social networking sites themselves that most people don’t even hesitate to click on a link because they trust the sender.

The unfortunate reality is that criminals are taking advantage of that trust to disseminate malware and links to infected Web sites. Websense Security Labs recently found examples of e-mails sent from what appeared to be Facebook, but were really from criminals that encouraged users to click on a link to a “video” that was actually a page infected with malware.
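
One practical habit that follows from point four: rather than clicking a shortened URL from a “friend” blindly, expand it first and see where it actually points. Here’s a minimal Python sketch of the idea; the requests library and the tinyurl address are just stand-ins for whatever you actually use.

```python
# Minimal sketch: peek at where a shortened URL redirects before visiting it.
# Requires the third-party "requests" package; the example URL is made up.
import requests

def expand_short_url(url: str) -> str:
    """Return the Location header a link shortener points to, without following it."""
    resp = requests.head(url, allow_redirects=False, timeout=10)
    return resp.headers.get("Location", url)

print(expand_short_url("http://tinyurl.com/example"))  # hypothetical link from a tweet
```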

Graphene and tunable semiconductors

A double dose of graphene news for tonight.

The release:

Tunable semiconductors possible with hot new material called graphene

Tunable bandgap means tunable transistors, LEDs and lasers

Berkeley — Today’s transistors and light emitting diodes (LED) are based on silicon and gallium arsenide semiconductors, which have fixed electronic and optical properties.

Now, University of California, Berkeley, researchers have shown that a form of carbon called graphene has an electronic structure that can be controlled by an electrical field, an effect that can be exploited to make tunable electronic and photonic devices.

While such properties were predicted for a double layer of graphene, this is the first demonstration that bilayer graphene exhibits an electric field-induced, broadly tunable bandgap, according to principal author Feng Wang, UC Berkeley assistant professor of physics.

The bandgap of a material is the energy difference between electrons residing in the two most important states of a material – valence band states and conduction band states – and it determines the electrical and optical properties of the material.

“The real breakthrough in materials science is that for the first time you can use an electric field to close the bandgap and open the bandgap. No other material can do this, only bilayer graphene,” Wang said.

Because tuning the bandgap of bilayer graphene can turn it from a metal into a semiconductor, a single millimeter-square sheet of bilayer graphene could potentially hold millions of differently tuned electronic devices that can be reconfigured at will, he said.

Wang, post-doctoral fellow Yuanbo Zhang, graduate student Tsung-Ta Tang and their UC Berkeley and Lawrence Berkeley National Laboratory (LBNL) colleagues report their success in the June 11 issue of Nature.

“The fundamental difference between a metal and a semiconductor is this bandgap, which allows us to create semiconducting devices,” said coauthor Michael Crommie, UC Berkeley professor of physics. “The ability to simply put a material between two electrodes, apply an electric field and change the bandgap is a huge deal and a major advance in condensed matter physics, because it means that in a device configuration we can change the bandgap on the fly by sending an electrical signal to the material.”

Graphene is a sheet of carbon atoms, each atom chemically bonded to its three neighbors to produce a hexagonal array that looks a lot like chicken wire. Since it was first isolated from graphite, the material in pencil lead, in 2004, it has been a hot topic of research, in part because solid state theory predicts unusual electronic properties, including a high electron mobility more than 10 times that of silicon.

However, the property that makes it a good conductor – its zero bandgap – also means that it’s always on.

“To make any electronic device, like a transistor, you need to be able to turn it on or off,” Zhang said. “But in graphene, though you have high electron mobility and you can modulate the conductance, you can’t turn it off to make an effective transistor.”

Semiconductors, for example, can be turned off because of a finite bandgap between the valence and conduction electron bands.

While a single layer of graphene has a zero bandgap, two layers of graphene together theoretically should have a variable bandgap controlled by an electrical field, Wang said. Previous experiments on bilayer graphene, however, have failed to demonstrate the predicted bandgap structure, possibly because of impurities. Researchers obtain graphene with a very low-tech method: They take graphite, like that in pencil lead, smear it over a surface, cover with Scotch tape and rip it off. The tape shears the graphite, which is just billions of layers of graphene, to produce single- as well as multi-layered graphene.

Wang, Zhang, Tang and their colleagues decided to construct bilayer graphene with two voltage gates instead of one. When the gate electrodes were attached to the top and bottom of the bilayer and electrical connections (a source and drain) made at the edges of the bilayer sheets, the researchers were able to open up and tune a bandgap merely by varying the gating voltages.

The team also showed that it can change another critical property of graphene, its Fermi energy, that is, the maximum energy of occupied electron states, which controls the electron density in the material.

“With top and bottom gates on bilayer graphene, you can independently control the two most important parameters in a semiconductor: You can change the electronic structure to vary the bandgap continuously, and independently control electron doping by varying the Fermi level,” Wang said.

Because of charge impurities and defects in current devices, the graphene’s electronic properties do not reflect the intrinsic graphene properties. Instead, the researchers took advantage of the optical properties of bandgap materials: If you shine light of just the right color on the material, valence electrons will absorb the light and jump over the bandgap.

In the case of graphene, the maximum bandgap the researchers could produce was 250 milli-electron volts (meV). (In comparison, the semiconductors germanium and silicon have about 740 and 1,200 meV bandgaps, respectively.) Putting the bilayer graphene in a high intensity infrared beam produced by LBNL’s Advanced Light Source (ALS), the researchers saw absorption at the predicted bandgap energies, confirming its tunability.

Because the zero to 250 meV bandgap range allows graphene to be tuned continuously from a metal to a semiconductor, the researchers foresee turning a single sheet of bilayer graphene into a dynamic integrated electronic device with millions of gates deposited on the top and bottom.

“All you need is just a bunch of gates at all positions, and you can change any location to be either a metal or a semiconductor, that is, either a lead to conduct electrons or a transistor,” Zhang said. “So basically, you don’t fabricate any circuit to begin with, and then by applying gate voltages, you can achieve any circuit you want. This gives you extreme flexibility.”

“That would be the dream in the future,” Wang said.

Depending on the lithography technique used, the size of each gate could be much smaller than one micron – a millionth of a meter – allowing millions of separate electronic devices on a millimeter-square piece of bilayer graphene.

Wang and Zhang also foresee optical applications, because the zero-250 meV bandgap means graphene LEDs would emit frequencies anywhere in the far- to mid-infrared range. Ultimately, it could even be used for lasing materials generating light at frequencies from the terahertz to the infrared.

“It is very difficult to find materials that generate light in the infrared, not to mention a tunable light source,” Wang said.

Crommie noted, too, that solid state physicists will have a field day studying the unusual properties of bilayer graphene. For one thing, electrons in monolayer graphene appear to behave as if they have no mass and move like particles of light – photons. In tunable bilayer graphene, the electrons suddenly act as if they have masses that vary with the bandgap.

“This is not just a technological advance, it also opens the door to some really new and potentially interesting physics,” Crommie said.

 

###

 

Wang, Zhang, Tang and their colleagues continue to explore graphene’s electronic properties and possible electronic devices.

Their coauthors are Crommie, Alex Zettl and Y. Ron Shen, UC Berkeley professors of physics; physics post-doctoral fellow Caglar Girit; and Zhao Hao and Michael C. Martin of LBNL’s ALS Division. Zhang is a Miller Post-doctoral Fellow at UC Berkeley.

The work was supported by the U.S. Department of Energy.
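
A quick back-of-the-envelope check on the infrared claim in the release: the wavelength of an emitted photon in micrometers is roughly 1.24 divided by the bandgap in electron volts. Here’s a small sketch of that conversion; it’s my own math, not part of the release.

```python
# Convert a semiconductor bandgap to the wavelength of the emitted photon,
# using lambda (um) ~= 1.2398 / E (eV), i.e. hc expressed in eV*um.

def bandgap_to_wavelength_um(bandgap_ev: float) -> float:
    """Photon wavelength in micrometers for a given bandgap energy in eV."""
    return 1.2398 / bandgap_ev

for label, ev in [("bilayer graphene, max 250 meV", 0.250),
                  ("germanium, ~740 meV", 0.740),
                  ("silicon, ~1,200 meV", 1.200)]:
    print(f"{label}: {bandgap_to_wavelength_um(ev):.2f} um")

# Graphene's tunable 0-250 meV range maps to wavelengths of roughly 5 um and
# longer, i.e. the mid- to far-infrared, consistent with the release.
```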

Assembly with graphene

Interesting research on the properties of one of the more exciting nanotech materials out there.

The release:

Penn materials scientist finds plumber’s wonderland on graphene

IMAGE: This is an electron micrograph showing the formation of interconnected carbon nanostructures on a graphene substrate, which may be harnessed to make future electronic devices.


PHILADELPHIA — Engineers from the University of Pennsylvania, Sandia National Laboratories and Rice University have demonstrated the formation of interconnected carbon nanostructures on a graphene substrate in a simple assembly process that involves heating few-layer graphene sheets to sublimation using electric current, an approach that may eventually lead to a new paradigm for building integrated carbon-based devices.

Curvy nanostructures such as carbon nanotubes and fullerenes have extraordinary properties but are extremely challenging to pick up, handle and assemble into devices after synthesis. Penn materials scientist Ju Li and Sandia scientist Jianyu Huang have come up with a novel idea to construct curvy nanostructures directly integrated on graphene, taking advantage of the fact that graphene, an atomically thin two-dimensional sheet, bends easily after open edges have been cut on it, which can then fuse with other open edges permanently, like a plumber connecting metal fittings.

The “knife” and “welding torch” used in the experiments, which were performed inside an electron microscope, was electrical current from a Nanofactory scanning probe, generating up to 2000°C of heat. Upon applying the electrical current to few-layer graphene, they observed the in situ creation of many interconnected, curved carbon nanostructures, such as “fractional nanotube”-like graphene bi-layer edges, or BLEs; BLE rings on graphene equivalent to “anti quantum-dots”; and nanotube-BLE assembly connecting multiple layers of graphene.

Remarkably, researchers observed that more than 99 percent of the graphene edges formed during sublimation were curved BLEs rather than flat monolayer edges, indicating that BLEs are the stable edges in graphene, in agreement with predictions based on symmetry considerations and energetic calculations. Theory also predicts these BLEs, or “fractional nanotubes,” possess novel properties of their own and may find applications in devices.

The study is published in the current issue of the journal Proceedings of the National Academy of Sciences. Short movies of the fabrication of these nanostructures can be viewed at www.youtube.com/user/MaterialsTheory.

Li and Huang observed the creation of these interconnected carbon nanostructures using the heat of electric current and a high-resolution transmission electron microscope. The current, once passed through the graphene layers, improved the crystalline quality and surface cleanness of the graphene as well, both important for device fabrication.

The sublimation of few-layer graphene, such as a 10-layer stack, is advantageous over the sublimation of monolayers. In few-layer graphene, layers spontaneously fuse together forming nanostructures on top of one or two electrically conductive, extended, graphene sheets.

During heating, both the flat graphene sheets and the self-wrapping nanostructures that form, like bilayer edges and nanotubes, have unique electronic properties important for device applications. The biggest obstacle for engineers has been wrestling control of the structure and assembly of these nanostructures to best exploit the properties of carbon. The discoveries of self-assembled novel carbon nanostructures may circumvent the hurdle and lead to a new approach to graphene-based electronic devices.

Researchers induced the sublimation of multilayer graphene by Joule-heating, making it thermodynamically favorable for the carbon atoms at the edge of the material to escape into the gas phase, leaving freshly exposed edges on the solid graphene. The remaining graphene edges curl and often weld together to form BLEs. Researchers attribute this behavior to nature’s driving force to reduce capillary energy, dangling bonds on the open edges of monolayer graphene, at the cost of increased bending energy.

“This study demonstrates it is possible to make and integrate curved nanostructures directly on flat graphene, which is extended and electrically conducting,” said Li, associate professor in the Department of Materials Science and Engineering in Penn’s School of Engineering and Applied Science. “Furthermore, it demonstrates that multiple graphene sheets can be intentionally interconnected. And the quality of the plumbing is exceptionally high, better than anything people have used for electrical contacts with carbon nanotubes so far. We are currently investigating the fundamental properties of graphene bi-layer edges, BLE rings and nanotube-BLE junctions.”

 

###

 

The study was performed by Li and Liang Qi of Penn, Jian Yu Huang and Ping Lu of the Center for Integrated Nanotechnologies at Sandia and Feng Ding and Boris I. Yakobson of the Department of Mechanical Engineering and Materials Science at Rice.

It was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Honda Research Institute, the Department of Energy and the Office of Naval Research.

Getting a little gray?

Filed under: et.al., Science — David Kirkpatrick @ 12:50 am

Maybe it is stress after all.

The release:

Stress makes your hair go gray

Those pesky graying hairs that tend to crop up with age really are signs of stress, reveals a new report in the June 12 issue of Cell, a Cell Press publication.

Researchers have discovered that the kind of “genotoxic stress” that does damage to DNA depletes the melanocyte stem cells (MSCs) within hair follicles that are responsible for making those pigment-producing cells. Rather than dying off, when the going gets tough, those precious stem cells differentiate, forming fully mature melanocytes themselves. Anything that can limit the stress might stop the graying from happening, the researchers said.

“The DNA in cells is under constant attack by exogenously- and endogenously-arising DNA-damaging agents such as mutagenic chemicals, ultraviolet light and ionizing radiation,” said Emi Nishimura of Tokyo Medical and Dental University. “It is estimated that a single cell in mammals can encounter approximately 100,000 DNA damaging events per day.”

Consequently, she explained, cells have elaborate ways to repair damaged DNA and prevent the lesions from being passed on to their daughter cells.

“Once stem cells are damaged irreversibly, the damaged stem cells need to be eliminated to maintain the quality of the stem cell pools,” Nishimura continued. “We found that excessive genotoxic stress triggers differentiation of melanocyte stem cells.” She says that differentiation might be a more sophisticated way to get rid of those cells than stimulating their death.

Nishimura’s group earlier traced the loss of hair color to the gradual dying off of the stem cells that maintain a continuous supply of new melanocytes, giving hair its youthful color. Those specialized stem cells are not only lost, they also turn into fully committed pigment cells and in the wrong place.

Now, they show in mice that irreparable DNA damage, as caused by ionizing radiation, is responsible. They further found that the “caretaker gene” known as ATM (for ataxia telangiectasia mutated) serves as a so-called stemness checkpoint, protecting against MSCs differentiation. That’s why people with Ataxia-telangiectasia, an aging syndrome caused by a mutation in the ATM gene, go gray prematurely.

The findings lend support to the notion that genome instability is a significant factor underlying aging in general, the researchers said. They also support the “stem cell aging hypothesis,” which proposes that DNA damage to long-lived stem cells can be a major cause for the symptoms that come with age. In addition to the aging-associated stem cell depletion typically seen in melanocyte stem cells, qualitative and quantitative changes to other body stem cells have been reported in blood stem cells, cardiac muscle, and skeletal muscle, the researchers said. Stresses on stem cell pools and genome maintenance failures have also been implicated in the decline of tissue renewal capacity and the accelerated appearance of aging-related characteristics.

“In this study, we discovered that hair graying, the most obvious aging phenotype, can be caused by the genomic damage response through stem cell differentiation, which suggests that physiological hair graying can be triggered by the accumulation of unavoidable DNA damage and DNA-damage response associated with aging through MSC differentiation,” they wrote.

 

###

 

The researchers include Ken Inomata, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan, KOSÉ Corporation, Tokyo, Japan, Hokkaido University Graduate School of Medicine; Takahiro Aoto, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan, Tokyo Medical and Dental University, Tokyo, Japan; Nguyen Thanh Binh, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan; Natsuko Okamoto, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan, Kyoto University Graduate School of Medicine, Kyoto, Japan; Shintaro Tanimura, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan, Hokkaido University Graduate School of Medicine; Tomohiko Wakayama, Kanazawa University, Ishikawa, Japan; Shoichi Iseki, Kanazawa University, Ishikawa, Japan; Eiji Hara, The Cancer Institute, Japanese Foundation for Cancer Research, Tokyo, Japan; Takuji Masunaga, KOSÉ Corporation, Tokyo, Japan; Hiroshi Shimizu, Hokkaido University Graduate School of Medicine; and Emi K. Nishimura, Kanazawa University, Takaramachi, Kanazawa, Ishikawa, Japan, Tokyo Medical and Dental University, Tokyo, Japan.

Inomata et al.: “Genotoxic Stress Abrogates Renewal of Melanocyte Stem Cells by Triggering Their Differentiation.” Publishing in Cell 137, 1088-1099, June 12, 2009. DOI 10.1016/j.cell.2009.03.037 www.cell.com.

June 11, 2009

Obama won on message and not web 2.0

Filed under: Politics, Technology — David Kirkpatrick @ 5:00 pm

For all the discussion of Obama’s campaign utilizing social media and how web 2.0 tools were game-changers in 2008, his social media director says Obama’s political message was the key component in his victory.

From the link:

Social media may be the flavor of the moment for corporate marketers but these tools won’t work for everyone, according to the man who led the social media component of Barack Obama’s 2008 presidential campaign, saying it was Obama’s message — and not the medium — that carried the 2008 election.

“Message and messenger are key. This isn’t going to work for every organization or every start-up business if the message that you are selling isn’t resonating,” said Scott Goodstein, the CEO of Revolution Messaging and formerly the external online director at Obama for America, during a speech at the Ad:Tech Singapore conference on Wednesday.

Obama, who was elected president last year, used the Internet and social media — a broad term that encompasses social networking sites, blogs, video-sharing sites like YouTube, and message service Twitter — to spread his views on key topics and organize his supporters. But the candidate, not social media or the Internet, won the election, Goodstein said.

“It was an honor to work at the Obama campaign because at that point in American history we had the right candidate, the right message,” he said.

Notes from the latest Fed Beige Book

Filed under: Business, Politics — David Kirkpatrick @ 4:52 pm

This link covers material from all twelve regional Federal Reserve banks.

From the link:

DALLAS

(This region covers Texas and parts of New Mexico and Louisiana.)

Activity was weak, but there were “increased reports of stabilization” from manufacturers. Business outlooks slightly more optimistic. Some producers linked to housing construction said the pace of decline in activity had eased and “demand was bumping along the bottom.”

Some improvement in retail sales with customers returning, but “very cost conscious.” Most retailers don’t expect any “solid improvement” until 2010. Homebuilders reported a slight uptick in sales, helped by first-time home buyer tax credit. Commercial real estate activity softened. In the energy sector, demand for oil services and machinery fell as drilling activity plunged. Dry conditions hampered agriculture production in Southwest Texas.

SAN FRANCISCO

(This region covers California, Washington, Oregon, Idaho, Nevada, Utah, Arizona, Hawaii and Alaska.)

Economic activity slowed further, but there were reports pointing to “signs of stabilization or improvement in some sectors.” Retail sales were “feeble” with shoppers favoring inexpensive necessities over luxury goods. Sales rose for grocers, but were “anemic” for retailers of furniture, appliances and electronic items.

Manufacturing activity remained at low levels, but conditions improved for makers of semiconductors and other information technology products. Housing market remained weak but showed signs of improvement. Many areas reported a pickup in sales aided by low mortgage rates, declining prices and high foreclosure rates. The pace of home construction remained very slow. Demand for commercial space fell, with vacancy rates rising. Some tenants have requested and received concessions on lease rates for office and retail space.

June 6, 2009

Saturday video art — passage à l’acte

Filed under: Arts, et.al., Media, Technology — David Kirkpatrick @ 8:47 pm

This video was linked in the comment section at boing boing from this video post of mine and it was just too cool to avoid.

Twelve minutes of “To Kill a Mockingbird” condensed.

The video:

June 5, 2009

Lauding nuclear energy shutdown?

Not sure if this is something to be proud of. I bet Sacramento wished Rancho Seco was still operating during those rolling blackouts a few years ago …

The release hot from the inbox:

Nuclear Reactor Shutdown Vote 20 Years Ago Reverberates Today in Actions by 900 Mayors and Renewable Portfolio Standards in 2 Dozen States

“Shot Heard Round the World” Echoes in Strong Local, State Opposition Across U.S. to New Nuclear Reactors

SACRAMENTO, Calif., June 5 /PRNewswire-USNewswire/ — Ahead of the 20th anniversary on Saturday of Sacramento voters going to the polls to shut down Rancho Seco, a nuclear reactor operated by the Sacramento Municipal Utility District (SMUD) about 25 miles southeast of the city, organizers held a news conference today to mark the event.

In his remarks at the news conference, Scott Denman, former executive director of the national Safe Energy Communication Council, emphasized that votes against nuclear power continue to this day.

Since the historic Rancho Seco shutdown vote, more than two dozen states have legislated or passed referenda requiring that utilities meet a specific target – usually ranging 10-30 percent of the electricity supply – for sustainable energy resources by a specific date, Denman said.  Power that will be available from these “renewable portfolio standards” (RPS) sources is now routinely cited as a reason not to pursue more nuclear reactors.

Additionally, Denman noted that more than 900 elected mayors of cities including Denver, Chicago, Portland, Austin, and Salt Lake City have signed the Mayor’s Initiative on Climate Change to use sustainable energy resources to power their jurisdictions to prosperity.

Denman’s prepared remarks for the news conference read as follows:

“Good morning.  I am a national energy policy consultant and the former executive director of the national coalition, Safe Energy Communication Council.  In 1988, and again in 1989, I coordinated the national environmental community in assisting the local sponsors of the ultimately successful ballot initiatives and campaigns to close the Rancho Seco reactor.

Twenty years ago, I hailed the victory as ‘a shot heard ’round the world.’ I said then that the intrepid organizers and those who voted to shutdown the reactor were ‘a new breed of American patriots’ and that this historic vote would spark the shift away from costly and dangerous reactors, and catalyze a movement for clean, affordable, safe, secure energy efficient and renewable energy technologies.  That is exactly what has happened.

Since this pioneering vote in 1989, more than two dozen states have legislated or passed referenda requiring that utilities provide a specific percentage – typically ranging between 10-30 percent of the electricity supply – to be generated by sustainable energy resources by a date certain.  More than 940 mayors of cities like Denver, Chicago, Portland, Austin, and Salt Lake City representing 84 million Americans have signed the Mayor’s Initiative on Climate Change to use sustainable energy resources to power their jurisdictions to prosperity.

By terminating the Rancho Seco reactor, Sacramento’s public power utility, the Sacramento Municipal Utility District (SMUD), today has significantly lower rates than PG&E, Southern California Edison, and many other U.S. utilities.  Indeed, SMUD’s innovative energy efficiency and conservation programs have been replicated with great success.  SMUD’s pioneering work to bring utility grade solar and other renewably produced electricity to the grid has been a viable model for communities and utilities.

Proposed new nuclear reactors would simply be too expensive and also take too long to build. Since the vote (and some 15 years before it), not one new reactor has been licensed. Sacramento’s voters were prescient as well as prudent managers of their own pocketbooks. New reactors are now estimated to cost customers about 15 cents per kilowatt-hour on monthly electric bills, more than two times more expensive than wind power. In comparison, energy efficiency improvements cost consumers zero to five cents per kilowatt hour. One Pennsylvania utility (PPL) has just announced that its proposed reactor will cost ratepayers a staggering $15 billion dollars. Thus, new reactors are a fiscal black hole and loom as a fool’s gold solution to the growing real threat of global greenhouse gases.

The nuclear and utility industries keep coming back to the public trough for more and more bailouts, handouts, tax breaks, and subsidies.  Now, nuclear cheerleaders in Congress are trying to force you and me, the taxpayers to give away more than $100 billion in dangerous loan guarantees and other financial shell games that shift responsibility for failed nuclear projects on to the backs of the American families and businesses.  The Congressional Budget Office has concluded that 50% of new nuclear reactor loans will default.  The nuclear industry and their lobbyists want us to take the risk while they pocket the profits.  This path is a sure way to repeat the disastrous failure of subprime mortgages and unregulated bad debt that nearly collapsed our entire financial system in the past 12 months.

It’s time to give wind, geothermal, solar and energy efficiency its first real chance.   New reactors would lead us to more lemons like Rancho Seco, deeper national financial debt, and further economic crisis.

We have sustainable energy resources today that we, our children, and our grandchildren can live with.  The bottom line lesson from Ranch Seco 20 years later:  Don’t get fooled by the same old promises of nuclear reactors.  We can’t pay the price.   Thank you.”

Other news event participants included former California State Senator Tom Hayden; former SMUD Board Member Ed Smeloff; and Bob Mulholland, campaign manager, No on Measure K.

Source: Physicians for Social Responsibility, Washington, D.C.
   

Web Site:  http://www.psr.org/ranchoseco
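
For what it’s worth, the per-kilowatt-hour figures in the release are easier to feel as a monthly bill. Here’s a rough sketch using an assumed 900 kWh per month of household usage; the wind and efficiency rates are my own reading of the numbers quoted above, so treat this as illustration rather than analysis.

```python
# Rough monthly-bill comparison using the per-kWh figures quoted in the release.
# The 900 kWh/month household usage and the exact wind/efficiency rates are assumptions.

monthly_kwh = 900  # assumed typical household consumption

cost_per_kwh = {
    "new nuclear (release: ~15 cents)": 0.15,
    "wind (release: less than half of nuclear)": 0.07,
    "efficiency (release: 0-5 cents, midpoint)": 0.025,
}

for source, rate in cost_per_kwh.items():
    print(f"{source}: ${monthly_kwh * rate:,.2f}/month")
# Roughly $135 vs. $63 vs. $22.50 a month under these assumptions.
```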

IRS announces tax preparer review

Filed under: Business, Politics — David Kirkpatrick @ 4:26 pm

A release from the Internal Revenue Service:

IRS Launches Tax Return Preparer Review; Recommendations to Improve Compliance Expected by Year End

 
IR-2009-57, June 4, 2009

WASHINGTON — IRS Commissioner Doug Shulman announced today that by the end of 2009, he will propose a comprehensive set of recommendations to help the Internal Revenue Service better leverage the tax return preparer community with the twin goals of increasing taxpayer compliance and ensuring uniform and high ethical standards of conduct for tax preparers.

Some of the potential recommendations could focus on a new model for the regulation of tax return preparers; service and outreach for return preparers; education and training of return preparers; and enforcement related to return preparer misconduct. The Commissioner will submit recommendations to the Treasury Secretary and the President by the end of the year.

“Tax return preparers help Americans with one of their biggest financial transactions each year. We must ensure that all preparers are ethical, provide good service and are qualified,” Shulman said. “At the end of the day, tax preparers and the associated industry must be part of our overall game plan to strengthen the integrity of the tax system.”

The first part of this groundbreaking effort will involve fact finding and receiving input from a large and diverse constituent community that includes those that are licensed by state and federal authorities — such as enrolled agents, lawyers and accountants — as well as unlicensed tax preparers and software vendors. The effort will also seek input and dialog with consumer groups and taxpayers.

“We plan to have a transparent and open dialogue about the issues,” Shulman said. “At this early and critical stage of the process, we need to hear from the broadest possible range of stakeholders.”

Later this year, the IRS plans to hold a number of open meetings in Washington and around the country with constituent groups.

More information, including schedules and agendas for public meetings, will be posted on the “Tax Professionals” page on this Web site and will be communicated to stakeholder groups.

Cloud computing and business

Filed under: Business, Technology — David Kirkpatrick @ 4:16 pm

I’ve done plenty of blogging about cloud computing in the past and here are two more links on the topic. First up is a BusinessWeek breakdown on how cloud computing will change business, and next are the thoughts of Microsoft’s chief software architect, Ray Ozzie, on cloud computing.

From the BusinessWeek link:

In 1990, in a keynote speech at the Comdex computer conference, Microsoft’s (MSFT) then-chief executive, Bill Gates, bolstered his bona fides as a tech visionary when he declared the PC industry would produce advances within a few years that would put information at people’s fingertips. To get there, Gates said, the world needed three things: a more “personal” personal computer, more powerful communications networks, and easy access to a broad range of information. Sometimes visionaries are right on the vision but off on the timing.

Only now is Gates’ grand vision finally becoming a reality for businesses. While pieces of what he had in mind have been available for years, they typically were expensive and difficult to set up and use. Now that more personal PC is here in the form of smartphones and mini-laptops, and broadband wireless networks make it possible for people to be connected almost anytime and anywhere. At the same time, we’re seeing the rise of cloud computing, the vast array of interconnected machines managing the data and software that used to run on PCs. This combination of mobile and cloud technologies is shaping up to be one of the most significant advances in the computing universe in decades. “The big vision: We’re finally getting there,” says Donagh Herlihy, chief information officer of Avon Products (AVP). “Today, wherever you are, you can connect to all the information you need.”

And here’s Microsoft’s Ray Ozzie:

Ray Ozzie, Microsoft’s Chief Software Architect and the guest speaker at last night’s dinner (Techmeme), said the company wasn’t necessarily talking or thinking about the cloud when he came on board as part of the acquisition of his company, called Groove Networks, in 2005. When it came time to start offering a new way of thinking about the cloud and software, the approach came slowly. At the event, he said:

In any large organization, the government, the military, Wal-Mart, Microsoft, change of management is a challenge. You cannot effect change by mandate. You can’t say this is the way it’s gonna be and everyone snaps.

Speaking at any event where the topic has to do with cloud computing means that you inevitably are asked to define cloud computing. Clearly, Ozzie must have given a lot of thought to a definition for the cloud but he actually may have given it too much thought. While not quite as babbling as Sen. Ted Stevens’ explanation of how the Internet works (remember the “series of tubes?”), Ozzie’s definition of cloud computing was definitely worthy of a “huh?” head shake.

…self-service on-demand way of accessing resources with a virtualized abstraction that is relatively homogeneous

Wow. That’s a mouthful. But it also goes to show that even someone like the Chief Software Architect at Microsoft struggles with a way to define the cloud. Still, he spoke highly of the work that Microsoft does in the cloud environment, as well as on the client side, to meet the changing needs of all types of customers, from consumers to large enterprise.

C-level finds email most valuable

Filed under: Business, Technology — David Kirkpatrick @ 2:16 pm

I’d hazard a guess most people, and not just executives, find email either the most valuable data on a hard drive or tied for the top spot.

Now for a casual user who backs up nothing and has a priceless collection of photos or video, email could come in second. But many ordinary users who do at least rudimentary backing up of documents, images and other like data fail to back up email folders, and even such mundane-seeming items as favorites or cookies. After a failure all of these will be missed, probably more than realized.

Back to c-level executives, I can completely see where email is the most critical data to get back. The email inbox is truly a virtual inbox of work-to-do, information to process and documents to attend to. Losing that can be devastating. Too many executives allow the inbox or other email folders to become the de facto storage spot for very important information.

Food for thought, and a lesson to remember — back up thoroughly and often.

From the link:

With so much valuable and confidential information in our inboxes, it’s no wonder 81% would recover that data first. There’s a strong legal argument for better backup, too. 

E-mail is the most valued business document, according to a recent survey from Kroll Ontrack Inc.

Kroll asked 200 business executives across Canada, the U.S. and Europe which business documents they would most prefer to recover in the event of data loss. Eighty one per cent reported they would save their mailboxes.

E-mail is of critical importance because it contains so much information, said Dave Pearson, senior storage research analyst with IDC Canada.

“Test contracts to vendors or clients, confidential memos … all sorts of work documents, process documents, presentations, sales materials, all those things pass through your e-mail at different times,” he said.

Large organizations, especially those subject to lawsuits, should have a centralized backup repository for their e-mail, Pearson suggested. “It just makes the discovery process so much easier and so much less expensive for them,” he said.

But many companies still lack a well-thought-out e-mail archival policy. “A lot of companies may not realize how much of their business is contained in their e-mail or how many confidential or important things are said in e-mail that they need to keep track of,” said Pearson.

Backing up e-mail is a high priority in the enterprise and a vital practice for IT, according to George Goodall, senior research analyst at London-based Info-Tech Research Group Inc.

E-mail is very much the lifeblood of any organization, he said. “Many people, especially executives, use e-mail as a knowledge repository … the problem is, it’s a very difficult thing to backup and more importantly, it’s difficult to restore.”
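
For individual users the lesson is simple enough to automate. Here’s a minimal sketch of a dated mail-folder snapshot in Python; the Maildir path is an assumption, and anything at enterprise scale should use a real centralized archive of the kind Pearson describes.

```python
# Minimal sketch of "back up thoroughly and often" for a local mail folder:
# copy it into a timestamped snapshot directory. Paths here are hypothetical.
import shutil
from datetime import datetime
from pathlib import Path

MAIL_DIR = Path.home() / "Maildir"          # assumed location of local mail
BACKUP_ROOT = Path.home() / "mail-backups"  # wherever you keep snapshots

def backup_mail() -> Path:
    """Copy the mail folder into a dated snapshot directory and return its path."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / f"maildir-{stamp}"
    shutil.copytree(MAIL_DIR, dest)
    return dest

print(f"Backed up mail to {backup_mail()}")
```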

The “ick” factor and conservatives

One explanation given for opposition to same-sex marriage is the so-called “ick” factor: revulsion among heterosexuals against the very idea of homosexuality.

Take this for what it’s worth because it smells a lot like a solution in search of a problem, but here’s research from Cornell connecting that very response with a group largely against same-sex marriage — political conservatives.

Food for thought if nothing else.

The release (yeah, I know I said the release dump was over with the last post):

Easily grossed out? You’re more likely a conservative, says Cornell psychologist

Are you someone who squirms when confronted with slime, shudders at stickiness or gets grossed out by gore? Do crawly insects make you cringe or dead bodies make you blanch?

If so, chances are you’re more conservative — politically, and especially in your attitudes toward gays and lesbians — than your less-squeamish counterparts, according to two Cornell studies.

The results, said study leader David Pizarro, Cornell assistant professor of psychology, raise questions about the role of disgust — an emotion that likely evolved in humans to keep them safe from potentially hazardous or disease-carrying environments — in contemporary judgments of morality and purity.

In the first study, published in the journal Cognition & Emotion (Vol.23: No.4), Pizarro and co-authors Yoel Inbar of Harvard University’s Kennedy School of Government and Paul Bloom of Yale University surveyed 181 U.S. adults from politically mixed “swing states.” They subjected these adults to two indexes: the Disgust Sensitivity Scale (DSS), which offers various scenarios to assess disgust sensitivity, and a political ideology scale. From this they found a correlation between being more easily disgusted and political conservatism.

To test whether disgust sensitivity is linked to specific conservative attitudes, the researchers then surveyed 91 Cornell undergraduates with the DSS, as well as with questions about their positions on issues including gay marriage, abortion, gun control, labor unions, tax cuts and affirmative action.

Participants who rated higher in disgust sensitivity were more likely to oppose gay marriage and abortion, issues that are related to notions of morality or purity. The researchers also found a weak correlation between disgust sensitivity and support for tax cuts, but no link between disgust sensitivity and the other issues.

And in a separate study in the current issue of the journal Emotion (Vol.9: No.3), Pizarro and colleagues found a link between higher disgust sensitivity and disapproval of gays and lesbians. For this study, the researchers used implicit measures (measures that have been shown to assess attitudes people may be unwilling to report explicitly; or that they may not even know they possess).

Liberals and conservatives disagree about whether disgust has a valid place in making moral judgments, Pizarro noted. Conservatives have argued that there is inherent wisdom in repugnance; that feeling disgusted about something — gay sex between consenting adults, for example — is cause enough to judge it wrong or immoral, even lacking a concrete reason. Liberals tend to disagree, and are more likely to base judgments on whether an action or a thing causes actual harm.

Studying the link between disgust and moral judgment could help explain the strong differences in people’s moral opinions, Pizarro said; and it could offer strategies for persuading some to change their views.

“People have pointed out for a long time that a lot of our moral values seem driven by emotion, and in particular, disgust appears to be one of those emotions that seems to be recruited for moral judgments,” said Pizarro.

That can have tragic effects — as in cases throughout history where minorities have been victims of discrimination by groups that perceived them as having disgusting characteristics.

The research speaks to a need for caution when forming moral judgments, Pizarro added. “Disgust really is about protecting yourself from disease; it didn’t really evolve for the purpose of human morality,” he said. “It clearly has become central to morality, but because of its origins in contamination and avoidance, we should be wary about its influences.”

The studies were funded by Cornell.

 

###

Fluorescent nanodiamonds

The final nanotech release in this particular release dump. I hate to toss this much raw material on the blog in one fell swoop, but it’s pretty cool information and nice to see so much nanotech research being reported on one day.

The release:

A breakthrough toward industrial production of fluorescent nanodiamonds

The laboratory « Structure – Activité of Normal & Pathologic Biomolecules – SANPB », Inserm / UEVE U829 (Genopole Evry, France), in collaboration with the Material Centre of Mines-ParisTech (Evry, France), the NRG – UMR 5060 CNRS / UTBM (Technology University of Belfort-Montbéliard) and the Physics Institute of Stuttgart University (Germany), discovered a novel route to fabricate fluorescent nanoparticles from diamond microcrystals. Results are published in the June 10, 2009 issue of Nanotechnology.

Fluorescence is a major tool in the life and material sciences. In biology/medicine, the coupling of fluorescent dyes to proteins or nucleic acids (RNA, DNA) allows one to investigate their fate and interactions in cultured cells or in the body. Similarly, fluorescence is used in material sciences to detect electromagnetic fields, for optical storage or for tracking (notably to detect fake products). However, most fluorescent dyes are made of molecules with a limited lifetime due to chemical reactivity.

In this context fluorescent diamond nanoparticles present a valuable alternative thanks to their outstanding photophysical properties. They are very bright and possess long-term non-bleaching, non-blinking fluorescence in the red/NIR region. Based on these unique properties, multiple applications are foreseen in physics, material science, biochemistry and biology. However, until recently, the production of such nanoparticles was limited to the laboratory.

Only a single route is currently used to fabricate such fluorescent nanoparticles. It consists of irradiating substitutional nitrogen-containing diamond nanocrystals, produced by the diamond industry, with electron or ion beams to create vacancies in the crystal lattice. Isolated substitutional nitrogen atoms then trap a moving vacancy during annealing to form a fluorescent NV centre. Unfortunately, the efficiency and yield of this route are low due to amorphization and the loss of moving vacancies to the surface during irradiation and annealing.

A top-down processing of diamond microcrystals, which are less prone to amorphization and vacancy loss, would provide a more industrially scalable route. However, in this case two barriers have to be surmounted – the difficulties of irradiating large amounts of material and converting microdiamonds into nanocrystals while keeping both fluorescence properties and crystal structure intact.

In a recent study, which is published in Nanotechnology, researchers in France and Germany have successfully explored this alternative route to producing homogeneous samples of pure and very small fluorescent diamond nanoparticles with high yield. The fabrication procedure starts with the irradiation of finely controlled micron-size diamonds, followed by milling and purification steps. In this novel process, substitutional nitrogen-containing microdiamonds with a defined atomic composition were irradiated using a high-energy electron beam and then annealed at high temperature (800 °C) to create the desired photoluminescent centres in an intact diamond lattice. An original two-step milling protocol was designed to convert the fluorescent microdiamonds into very small (down to 4 nm), round-shaped nanoparticles of highly pure sp3 diamond with very bright and stable photoluminescent centres.

Such a fine fabrication process can now be used for the large-scale production of fluorescent diamond nanoparticles. Their properties can be varied and tailored via the composition of the starting material to meet the needs of future applications. These fluorescent diamond nanoparticles open realistic prospects for very long-term labeling, quantitative biology, and innovative nanotechnology applications in composites, optoelectronics and analytical chemistry.

 

###

 

Reference: “High yield fabrication of fluorescent nanodiamonds,” Jean-Paul Boudou, Patrick A. Curmi, Fedor Jelezko, Joerg Wrachtrup, Pascal Aubert, Mohamed Sennour, Gopalakrishnan Balasubramanian, Rolf Reuter, Alain Thorel and Eric Gaffet, 2009, Nanotechnology 20, 235602.

(see http://www.iop.org/EJ/abstract/0957-4484/20/23/235602/)

Graphene beats copper in IC connections

It’s been a while since I’ve had the chance to blog about graphene, but here is the latest on the carbon nanomaterial.  (Be sure to hit the second link for images.)

The release:

Graphene May Have Advantages Over Copper for Future IC Interconnects

New Material May Replace Traditional Metal at Nanoscale Widths

Atlanta (June 4, 2009) —The unique properties of thin layers of graphite—known as graphene—make the material attractive for a wide range of potential electronic devices. Researchers have now experimentally demonstrated the potential for another graphene application: replacing copper for interconnects in future generations of integrated circuits.

In a paper published in the June 2009 issue of the IEEE journal Electron Device Letters, researchers at the Georgia Institute of Technology report detailed analysis of resistivity in graphene nanoribbon interconnects as narrow as 18 nanometers.

The results suggest that graphene could out-perform copper for use as on-chip interconnects—tiny wires that are used to connect transistors and other devices on integrated circuits. Use of graphene for these interconnects could help extend the long run of performance improvements for silicon-based integrated circuit technology.

“As you make copper interconnects narrower and narrower, the resistivity increases as the true nanoscale properties of the material become apparent,” said Raghunath Murali, a research engineer in Georgia Tech’s Microelectronics Research Center and the School of Electrical and Computer Engineering. “Our experimental demonstration of graphene nanowire interconnects on the scale of 20 nanometers shows that their performance is comparable to even the most optimistic projections for copper interconnects at that scale. Under real-world conditions, our graphene interconnects probably already out-perform copper at this size scale.”

Beyond resistivity improvement, graphene interconnects would offer higher electron mobility, better thermal conductivity, higher mechanical strength and reduced capacitance coupling between adjacent wires.

“Resistivity is normally independent of the dimension—a property inherent to the material,” Murali noted. “But as you get into the nanometer-scale domain, the grain sizes of the copper become important and conductance is affected by scattering at the grain boundaries and at the side walls. These add up to increased resistivity, which nearly doubles as the interconnect sizes shrink to 30 nanometers.”
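The release says copper’s resistivity “nearly doubles” by the time interconnects shrink to 30 nanometers. As a rough, back-of-the-envelope illustration only (this is a toy scaling sketch, not the grain-boundary and surface-scattering models interconnect researchers actually use), here is how the size effect plays out in Python once wire width approaches copper’s electron mean free path:

# Toy illustration only (not the detailed grain-boundary models interconnect
# researchers actually use) of how copper's effective resistivity climbs as a
# wire narrows toward the electron mean free path.
RHO_BULK = 1.68e-8   # bulk copper resistivity at room temperature, ohm·m
MFP = 39e-9          # approximate electron mean free path in copper, m

def effective_resistivity(width_m, k=0.7):
    """Crude size-effect estimate; k is a fudge factor lumping grain-boundary
    and sidewall scattering together, chosen purely for illustration."""
    return RHO_BULK * (1.0 + k * MFP / width_m)

for w_nm in (100, 50, 30, 20):
    ratio = effective_resistivity(w_nm * 1e-9) / RHO_BULK
    print(f"{w_nm:>3} nm wide: ~{ratio:.1f}x bulk resistivity")

With these purely illustrative numbers the ratio works out to roughly 1.9x at 30 nm, consistent with the “nearly doubles” figure Murali cites.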

The research was supported by the Interconnect Focus Center, which is one of the Semiconductor Research Corporation/DARPA Focus Centers, and the Nanoelectronics Research Initiative through the INDEX Center.

Murali and collaborators Kevin Brenner, Yinxiao Yang, Thomas Beck and James Meindl studied the electrical properties of graphene layers that had been taken from a block of pure graphite. They believe the attractive properties will ultimately also be measured in graphene fabricated using other techniques, such as growth on silicon carbide, which now produces graphene of lower quality but has the potential for achieving higher quality.

Because graphene can be patterned using conventional microelectronics processes, the transition from copper could be made without integrating a new manufacturing technique into circuit fabrication.

“We are optimistic about being able to use graphene in manufactured systems because researchers can already grow layers of it in the lab,” Murali noted. “There will be challenges in integrating graphene with silicon, but those will be overcome. Except for using a different material, everything we would need to produce graphene interconnects is already well known and established.”

Experimentally, the researchers began with flakes of multi-layered graphene removed from a graphite block and placed onto an oxidized silicon substrate. They used electron beam lithography to construct four electrode contacts on the graphene, then used lithography to fabricate devices consisting of parallel nanoribbons of widths ranging between 18 and 52 nanometers. The three-dimensional resistivity of the nanoribbons on 18 different devices was then measured using standard analytical techniques at room temperature.
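For readers wondering how a “three-dimensional resistivity” gets extracted from measurements like these, the relation is just rho = R·A/L. A minimal Python sketch follows; the 18 nm width is the narrowest reported in the release, while the resistance, length and thickness are invented placeholders, not values from the paper:

# Hedged sketch of how a bulk (three-dimensional) resistivity is typically
# backed out of a resistance measurement on a ribbon of known geometry.
# All numbers are invented placeholders except the 18 nm width from the release.
def resistivity_ohm_m(resistance_ohm, length_m, width_m, thickness_m):
    """rho = R * A / L, with A the ribbon cross-section (width x thickness)."""
    return resistance_ohm * (width_m * thickness_m) / length_m

rho = resistivity_ohm_m(
    resistance_ohm=2.0e3,   # placeholder measured resistance
    length_m=1.0e-6,        # placeholder ribbon length between contacts
    width_m=18e-9,          # narrowest ribbon width reported in the release
    thickness_m=10e-9,      # placeholder multilayer-graphene thickness
)
print(f"resistivity ~ {rho:.2e} ohm·m  ({rho * 1e8:.1f} microohm·cm)")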

The best of the graphene nanoribbons showed conductivity equal to that predicted for copper interconnects of the same size. Because the comparisons were between non-optimized graphene and optimistic estimates for copper, they suggest that performance of the new material will ultimately surpass that of the traditional interconnect material, Murali said.

“Even graphene samples of moderate quality show excellent properties,” he explained. “We are not using very high levels of optimization or especially clean processes. With our straightforward processing, we are getting graphene interconnects that are essentially comparable to copper. If we do this more optimally, the performance should surpass copper.”

Though one of graphene’s key properties is reported to be ballistic transport—meaning electrons can flow through it without resistance—the material’s actual conductance is limited by factors that include scattering from impurities, line-edge roughness and substrate phonons (vibrations in the substrate lattice).

Use of graphene interconnects could help facilitate continuing increases in integrated circuit performance once feature sizes drop to approximately 20 nanometers, which could happen in the next five years, Murali said. At that scale, the increased resistance of copper interconnects could offset performance increases, meaning that without other improvements, higher density wouldn’t produce faster integrated circuits.

“This is not a roadblock to achieving scaling from one generation to the next, but it is a roadblock to achieving increased performance,” he said. “Dimensional scaling could continue, but because we would be giving up so much in terms of resistivity, we wouldn’t get a performance advantage from that. That’s the problem we hope to solve by switching to a different materials system for interconnects.”

Nanotech testing heart disease

Release number three in the dump, this covers a medical application of nanotech.

The release:

Researchers test nanoparticle to treat cardiovascular disease in mice

IMAGE: This is an image of a multifunctional micelle designed by the research team.

(Santa Barbara, Calif.) –– Scientists and engineers at UC Santa Barbara and other researchers have developed a nanoparticle that can attack plaque –– a major cause of cardiovascular disease. The new development is described in a recent issue of the Proceedings of the National Academy of Sciences.

The treatment is promising for the eventual development of therapies for cardiovascular disease, which is blamed for one third of the deaths in the United States each year. Atherosclerosis, which was the focus of this study, is one of the leading causes of cardiovascular disease. In atherosclerosis, plaque builds up on the walls of arteries and can cause heart attack and stroke.

IMAGE: Matthew Tirrell is a researcher at University of California – Santa Barbara.

“The purpose of our grant is to develop targeted nanoparticles that specifically detect atherosclerotic plaques,” said Erkki Ruoslahti, distinguished professor at the Burnham Institute for Medical Research at the University of California, Santa Barbara. “We now have at least one peptide, described in the paper, that is capable of directing nanoparticles to the plaques.”

The nanoparticles in this study are lipid-based collections of molecules that form a sphere called a micelle. The micelle has a peptide, a piece of protein, on its surface, and that peptide binds to the surface of the plaque.

Co-author Matthew Tirrell, The Richard A. Auhll Professor and dean of UCSB’s College of Engineering, specializes in lipid-based micelles. “This turned out to be a perfect fit with our targeting technology,” said Ruoslahti.

To accomplish the research, the team induced atherosclerotic plaques in mice by keeping them on a high-fat diet. They then intravenously injected these mice with the micelles, which were allowed to circulate for three hours.

IMAGE: Erkki Ruoslahti is a researcher at University of California – Santa Barbara.

“One important element in what we did was to see if we could target not just plaques, but the plaques that are most vulnerable to rupture,” said Ruoslahti. “It did seem that we were indeed preferentially targeting those places in the plaques that are prone to rupture.”

The plaques tend to rupture at the “shoulder,” where the plaque tissue meets the normal tissue. “That’s also a place where the capsule on the plaque is the thinnest,” said Ruoslahti. “So by those criteria, we seem to be targeting the right places.”

Tirrell added: “We think that self-assembled micelles (of peptide amphiphiles) of the sort we have used in this work are the most versatile, flexible nanoparticles for delivering diagnostic and therapeutic biofunctionality in vivo. The ease with which small particles, with sufficiently long circulation times and carrying peptides that target and treat pathological tissue, can be constructed by self-assembly is an important advantage.”

Ruoslahti said that UCSB’s strength in the areas of materials, chemistry, and bioengineering facilitated this research. He noted that he and Tirrell have been close collaborators.

 

###

 

The work was funded by a grant from the National Heart, Lung and Blood Institute of the National Institutes of Health.

In addition to Ruoslahti and Tirrell, the article, “Targeting Atherosclerosis Using Modular, Multifunctional Micelles,” was authored by David Peters of the Burnham Institute at UCSB and the Biomedical Sciences Graduate Group at UC San Diego; Mark Kastantin of UCSB’s Department of Chemical Engineering; Venkata Ramana Kotamraju of the Burnham Institute at UCSB; Priya P. Karmali of the Cancer Research Center, Burnham Institute for Medical Research in La Jolla; and Kunal Gujraty of the Burnham Institute at UCSB.

June 4, 2009

Nanoscale zipper cavities

More nanotech news.

The release:

Caltech scientists create nanoscale zipper cavity that responds to single photons of light

Device could be used for highly sensitive force detection, optical communications and more

IMAGE: Scanning electron microscope image of an array of “zipper” optomechanical cavities. The scale and sensitivity of the device are set by its physical mass (40 picograms/40 trillionths of a gram)…

PASADENA, Calif.—Physicists at the California Institute of Technology (Caltech) have developed a nanoscale device that can be used for force detection, optical communication, and more. The device exploits the mechanical properties of light to create an optomechanical cavity in which interactions between light and motion are greatly strengthened and enhanced. These interactions, notes Oskar Painter, associate professor of applied physics at Caltech, and the principal investigator on the research, are the largest demonstrated to date.

The device and the work that led to it are described in a recent issue of the journal Nature.

The fact that photons of light, despite having no mass, nonetheless carry momentum and can interact with mechanical objects is an idea that dates back to Kepler and Newton. The mechanical properties of light are also known to limit the precision with which one can measure an object’s position, since simply by using light to do the measurement, you apply a force and disturb the object.

It was important to consider these so-called back-action effects in the design of devices to measure weak, classical forces. Such considerations were part of the development of gravity-wave detectors like the Laser Interferometer Gravitational-Wave Observatory (LIGO). These sorts of interferometer-based detectors have also been used at much smaller scales, in scanning probe instruments used to detect or image atomic surfaces or even single electron spins.

To get an idea of how these systems work, consider a mirror attached to a floppy cantilever, or spring. The cantilever is designed to respond to a particular force—say, a magnetic field. Light shining down on the mirror will be deflected when the force is detected—i.e., when the cantilever moves—resulting in a variation in the light beam’s intensity that can then be detected and recorded.

“LIGO is a huge multikilometer-scale interferometer,” notes Painter. “What we did was to take that and scale it all the way down to the size of the wavelength of light itself, creating a nanoscale device.”

They did this, he explains, because as these interferometer-based detectors are scaled down, the mechanical properties of light become more pronounced, and interesting interactions between light and mechanics can be explored.

“To this end, we made our cantilevers many, many times smaller, and made the optical interaction many, many times larger,” explains Painter.

They call this nanoscale device a zipper cavity because of the way its dual cantilevers—or nanobeams, as Painter calls them—move together and apart when the device is in use. “If you look at it, it actually looks like a zipper,” Painter notes.

“Zipper structures break new ground on coupling photonics with micromechanics, and can impact the way we measure motion, even into the quantum realm,” adds Kerry Vahala, Caltech’s Ted and Ginger Jenkins Professor of Information Science and Technology and professor of applied physics, and one of the paper’s authors. “The method embodied in the zipper design also suggests new microfabrication design pathways that can speed advances in the subject of cavity optomechanics as a whole.”

To create their zipper cavity device, the researchers made two nanobeams from a silicon chip, poking holes through the beams to form an effective optical mirror. Instead of training a light down onto the nanobeams, the researchers used optical fibers to send the light “in plane down the length of the beams,” says Painter. The holes in the nanobeams intercept some of the photons, circulating them through the cavity between the beams rather than allowing them to travel straight through the device.

Or, to be more precise, the circulating photons actually create the cavity between the beams. As Painter puts it: “The mechanical rigidity of the structure and the changes in its optical response are predominantly governed by the internal light field itself.”

Such an interaction is possible, he adds, because the structure is precisely designed to maximize the transfer of momentum from the input laser’s photons to the mechanical nanobeams. Indeed, a single photon of laser light zipping through this structure produces a force equivalent to 10 times that of Earth’s gravity. With the addition of several thousand photons to the cavity, the nanobeams are effectively suspended by the laser light.
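A quick sanity check of those numbers (my own arithmetic, and it assumes the “10 times Earth’s gravity” comparison refers to the weight of the roughly 40-picogram structure mentioned in the image caption, which the release does not state explicitly):

# Back-of-the-envelope check of the quoted figures, assuming (an assumption,
# not stated explicitly in the release) that "10 times Earth's gravity" means
# ten times the weight of the ~40-picogram zipper structure.
G = 9.81                  # m/s^2
mass_kg = 40e-15          # 40 picograms
weight_n = mass_kg * G    # gravitational force on the nanobeams
optical_force_n = 10.0 * weight_n

print(f"weight of a 40 pg structure: {weight_n:.2e} N")
print(f"10x that weight:             {optical_force_n:.2e} N (a few piconewtons)")

So the optical force in question lands in the piconewton range, tiny in everyday terms but enormous relative to the weight of the nanobeams themselves.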

Changes in the intensity and other properties of the light as it moves along the beams to the far end of the chip can be detected and recorded, just as with any large-scale interferometer.

The potential uses for this sort of optomechanical zipper cavity are myriad. It could be used as a sensor in biology by coating it with a solution that would bind to, say, a specific protein molecule that might be found in a sample. The binding of the protein molecule to the device would add mass to the nanobeams, and thus change the properties of the light traveling through them, signaling that such a molecule had been detected. Similarly, it could be used to detect other ultrasmall physical forces, Painter adds.

Zipper cavities could also be used in optical communications, where circuits route information via optical beams of different colors, i.e., wavelengths. “You could control and manipulate what the optical beams of light are doing,” notes Painter. “As the optical signals moved around in a circuit, their direction or color could be manipulated via other control light fields.” This would create tunable photonics, “optical circuits that can be tuned with light.”

Additionally, the zipper cavity could lead to applications in RF-over-optical communications and microwave photonics as well, where a laser source is modulated at microwave frequencies, allowing the signals to travel for kilometers along optical fibers. In such systems, the high-frequency mechanical vibrations of the zipper cavity could be used to filter and recover the RF or microwave signal riding on the optical wave.

 

###

 

Other authors on the Nature paper, “A picogram- and nanometre-scale photonic-crystal optomechanical cavity,” include graduate students Matt Eichenfield (the paper’s first author) and Jasper Chan, and postdoctoral scholar Ryan Camacho.

Their research was supported by a Defense Advanced Research Projects Agency seeding effort, and an Emerging Models and Technologies grant from the National Science Foundation.

Photon driven nanomotor

Fair warning to all readers, a major press release dump is coming. Mostly nanotechnology news.

First up is research on a molecular nanomotor driven by light.

The release:

New, light-driven nanomotor is simpler, more promising, scientists say

GAINESVILLE, Fla. — Sunflowers track the sun as it moves from east to west. But people usually have to convert sunlight into electricity or heat to put its power to use.

Now, a team of University of Florida chemists is the latest to report a new mechanism to transform light straight into motion – albeit at a very, very, very tiny scale.

In a paper expected to appear soon in the online edition of the journal Nano Letters, the UF team reports building a new type of “molecular nanomotor” driven only by photons, or particles of light. While it is not the first photon-driven nanomotor, the almost infinitesimal device is the first built entirely with a single molecule of DNA — giving it a simplicity that increases its potential for development, manufacture and real-world applications in areas ranging from medicine to manufacturing, the scientists say.

“It is easy to assemble, has fewer parts and theoretically should be more efficient,” said Huaizhi Kang, a doctoral student in chemistry at UF and the first author of the paper.

The scale of the nanomotor is almost vanishingly small.

In its clasped, or closed, form, the nanomotor measures 2 to 5 nanometers — 2 to 5 billionths of a meter. In its unclasped form, it extends as long as 10 to 12 nanometers. Although the scientists say their calculations show it uses considerably more of the energy in light than traditional solar cells, the amount of force it exerts is proportional to its small size.

But that won’t necessarily limit its potential.

In coming years, the nanomotor could become a component of microscopic devices that repair individual cells or fight viruses or bacteria. Although in the conceptual stage, those devices, like much larger ones, will require a power source to function. Because it is made of DNA, the nanomotor is biocompatible. Unlike traditional energy systems, the nanomotor also produces no waste when it converts light energy into motion.

“Preparation of DNA molecules is relatively easy and reproducible, and the material is very safe,” said Yan Chen, a UF chemistry doctoral student and one of the authors of the paper.

Applications in the larger world are more distant. Powering a vehicle, running an assembly line or otherwise replacing traditional electricity or fossil fuels would require untold trillions of nanomotors, all working in tandem — a difficult challenge by any measure.

“The major difficulty lies ahead,” said Weihong Tan, a UF professor of chemistry and physiology, author of the paper and the leader of the research group reporting the findings. “That is how to collect the molecular level force into a coherent accumulated force that can do real work when the motor absorbs sunlight.”

Tan added that the group has already begun working on the problem.

“Some prototype DNA nanostructures incorporating single photo-switchable motors are in the making which will synchronize molecular motions to accumulate forces,” he said.

To make the nanomotor, the researchers combined a DNA molecule they created in the lab with azobenzene, a chemical compound that responds to light. A high-energy photon prompts one response; a lower-energy photon prompts another.

To demonstrate the movement, the researchers attached a fluorophore, or light-emitter, to one end of the nanomotor and a quencher, which can quench the emitting light, to the other end. Their instruments recorded emitted light intensity that corresponded to the motor movement.
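As a rough sketch of how such an intensity trace can be read (a simplified two-state picture I am assuming here, not the authors’ actual analysis): emission should be low when the motor is clasped, with the fluorophore held near the quencher, and high when it unclasps, so a measured intensity can be mapped onto a fraction of open motors.

# Illustrative two-state reading of the fluorophore/quencher signal (a
# simplifying assumption on my part, not the authors' analysis): emission is
# low when the motor is clasped (fluorophore near the quencher), high when open.
def fraction_open(intensity, i_closed, i_open):
    """Linearly interpolate a measured intensity between the fully closed
    and fully open calibration levels, clamped to [0, 1]."""
    frac = (intensity - i_closed) / (i_open - i_closed)
    return min(max(frac, 0.0), 1.0)

# Placeholder calibration and measurement values, for illustration only.
print(fraction_open(intensity=0.62, i_closed=0.10, i_open=0.90))  # ~0.65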

“Radiation does cause things to move from the spinning of radiometer wheels to the turning of sunflowers and other plants toward the sun,” said Richard Zare, distinguished professor and chairman of chemistry at Stanford University. “What Professor Tan and co-workers have done is to create a clever light-actuated nanomotor involving a single DNA molecule. I believe it is the first of its type.”

 

###

 

The National Institutes of Health and the National Science Foundation funded the research. The other coauthors of this paper are Haipeng Liu, Joseph A. Phillips, Zehui Cao, Youngmi Kim, Zunyi Yang and Jianwei Li.

Commercial leasing 101 for small business

Filed under: Business — Tags: , , , — David Kirkpatrick @ 3:05 pm

Here’s a great set of tips from Business.com for small businesses about to enter into a commercial real estate lease.

From the link, the first tip:

Go for free rent
A typical commercial lease rate is based on a cost per square foot. But in a “buyers market” for retail space — when there are a lot of openings available — landlords commonly will offer free rent for a certain number of months as an enticement. They’d much rather do this than lower the cost per square foot because it allows them to show the higher lease rate on their books, and lock in that all-important rate for the future. It also allows them to tell other prospective tenants that space has been rented at a higher cost per square foot, even if the “effective” rate is lower due to the free rent.
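To put the “effective rate” point in concrete terms, here is a quick back-of-the-envelope sketch (all numbers are made up for illustration, not taken from the article):

# Hypothetical illustration of the "effective rate" point: the face rate stays
# on the landlord's books while the tenant's average cost is lower.
def effective_rate(face_rate_per_sqft_yr, term_months, free_months):
    """Average annual cost per square foot once the free months are spread
    over the full lease term."""
    paid_months = term_months - free_months
    return face_rate_per_sqft_yr * paid_months / term_months

face = 24.00           # $ per square foot per year (made-up number)
term, free = 60, 6     # five-year lease with six months free (made-up)
print(f"face rate:      ${face:.2f}/sq ft/yr")
print(f"effective rate: ${effective_rate(face, term, free):.2f}/sq ft/yr")

In this made-up example the landlord keeps a $24-per-square-foot rate on the books, while the rate the tenant effectively pays over the five years drops to $21.60.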

Economic indicators not behaving

News stories like this perfectly illustrate why I remain skeptical of any rosy near-term predictions. I honestly think the media is doing the general public a great disservice with the “doom, doom, doom” followed by “it’s all going to be great!” followed soon after with “alas, woe is us.” The news cycles leave people tired, confused and probably distrustful of most things they hear.

The fact is we remain in fairly uncharted territory, and even though things look a little better now that everything is no longer in a free fall, many things could still happen to really crater the global economy. Let’s just say we are not on solid footing by any measure right now. Any report to the contrary is just blowing smoke.

Instead of worrying over all this bipolar news, the best bet is to keep a stiff-upper-lip, chin-up approach and just stay alert to the facts and conditions on the ground.

From the link:

The nation’s service sector shrank in May at the slowest pace since late last year. And factory orders rose in April. But the improvements fell short of economists’ expectations and disappointed investors, who sent stocks lower.

Economic reports earlier this week on home sales and manufacturing had been encouraging, but Wednesday’s figures sent a reminder that the economy remains sluggish.

“People assumed it was safe to go back outdoors, but it’s still raining,” said David Wyss, chief economist at Standard & Poor’s. “It’s just not raining quite as hard.”

The Institute for Supply Management said its services index registered 44 in May, up slightly from 43.7 in April. It was the highest reading since October. Service industries such as retailers, financial services, transportation and health care make up about 70 percent of U.S. economic activity.

But the ISM figure marked its eighth straight monthly decline, and it fell slightly below economists’ expectations. Any reading below 50 indicates the services sector is shrinking. The last time the index was at 50 or higher was in September.

Separately, the government reported that orders to U.S. factories rose 0.7 percent in April, the second increase in three months.

But the Commerce Department’s report fell short of analysts’ expectations. And the government also marked down the March figure to a 1.9 percent drop, from the 0.9 percent decline previously reported.

Wall Street fell after the disappointing figures were released. The Dow Jones industrial average dropped more than 65 points to 8,675.24. Broader averages also declined.

Federal Reserve Chairman Ben Bernanke, meanwhile, said Wednesday that the economy will begin growing later this year, but the improvement will be slight.

“We expect that the recovery will only gradually gain momentum,” he told lawmakers. “Businesses are likely to be cautious about hiring, and the unemployment rate is likely to rise for a time, even after economic growth resumes.”

The current recession, the longest since World War II, began after the bursting of the housing bubble led to a financial crisis last fall. Economists say recoveries after such crises tend to be slower, as credit remains tight even after growth returns.

Huxley: The Dystopia in beta

Filed under: Business, Media, Technology — Tags: , , , , , — David Kirkpatrick @ 2:30 pm

I don’t have any special knowledge about this game and it’s not really my style, but this release came across the inbox and just looked interesting for some reason.

If this sounds like your cup of tea hit the link below to do some beta-testing.

So here you go — the release:

ijji.com Selects FilePlanet to Facilitate Closed Beta for Huxley: The Dystopia

Leading North American Online Games Publisher Enlists IGN Download Site for Beta Signups and Key Distribution

IRVINE, Calif. and SAN FRANCISCO, June 4 /PRNewswire/ — ijji.com, the premier seven million members-strong online destination for hardcore gamers, has partnered with IGN’s FilePlanet to encourage beta signups and distribute beta keys for its highly anticipated massively multi-player online first-person shooter (MMOFPS) Huxley: The Dystopia.  The Unreal Engine 3-powered game combines the twitch-based fast-action game play of a FPS with the cerebral character development and persistent city of an MMO, in a gritty adventure through a post-apocalyptic world.  The FilePlanet beta signup page can be found online at www.fileplanet.com/promotions/huxley/beta/.

The partnership between ijji.com and FilePlanet.com – one of the most visited video game download sites on the internet – marks the first time players will be able to access the long awaited online game, as well as a strong alliance between two industry greats.  ijji.com brings expertise in launching quality and targeted hardcore titles to online gamers, while FilePlanet possesses a stellar track record in reaching those audiences.

In addition to the participants to whom ijji.com granted beta access, FilePlanet will begin selecting additional testers from its own community members to receive beta keys to Huxley: The Dystopia on June 3.  The closed beta starts on the same day and all those with keys are welcome to enter the game and see the world of Huxley for the first time.  This exclusive closed beta runs through June 14, and is open from 2:00 PM to 10:00 PM PDT, daily.  The first stages of closed beta testing will feature the game’s character creation and customization, and the player-versus-player (PvP) gameplay.  Additional features, such as player-versus-environment (PvE), will debut in subsequent testing periods.

“This collaboration will bring ijji.com an expanded reach to players that are sure to enjoy being among the first to see and experience the long-awaited Huxley: The Dystopia,” said Philip Yun, CEO, NHN USA, which hosts ijji.com.  “IGN’s media properties and online technology services are premier in their respective industries, and we are honored to align ourselves with such a proven team.”

The first of its kind, Huxley: The Dystopia complements its unique blend of game play with an engrossing story. In the not so distant future, a huge swarm of Nuclearite clouds passing through our solar system shatters the Earth’s moon. Lunar fragments mingled with the radioactive Nuclearites shower Earth with debris, catapulting the world into the greatest upheaval in history. In this post-apocalyptic world human beings have mutated and divided into Sapiens, Alternatives and Hybrids. The Sapiens and Alternatives both battle for control of the Earth’s resources as the Hybrids, a crossbreed third race, fight for their very survival. Huxley fans, game enthusiasts and anyone curious to find out more can check out the teaser site at http://huxley.ijji.com/. For more information on ijji.com, go to www.ijji.com.

About ijji.com and NHN USA

ijji.com (www.ijji.com) is the leading portal for hardcore online gaming.  Owned and operated by Irvine, Calif.-based NHN USA, Inc., ijji.com launched in 2006 and now boasts more than eight million unique registered gamers.  The portal hosts a diverse suite of hardcore, fast action free-to-play online games, each with an optional micro-currency model.  ijji.com’s extensive game portfolio includes GUNZ The Duel(R), SOLDIER FRONT(TM), GUNBOUND(R) Revolution, DRIFT CITY(TM), LUMINARY Rise of the GoonZu(TM) and MINING BOY(TM), and will soon include the highly-anticipated Unreal Engine 3-based MMOFPS game Huxley: The Dystopia and the fantasy MMORPG Soul of the Ultimate Nation.

About FilePlanet

FilePlanet is a video game download service that provides free game patches, mod files and media downloads (game demos, trailers) to its users. FilePlanet was launched and is run by GameSpy, a subsidiary of IGN Entertainment, and is one of the most visited video game download sites on the Internet. For more information, visit www.fileplanet.com.

About IGN Entertainment

IGN Entertainment, a unit of Fox Interactive Media, Inc., is a leading Internet media and services provider focused on the videogame and entertainment enthusiast markets. Collectively, IGN’s properties reached more than 35 million unique users worldwide in the month of February 2009, according to Internet audience measurement firm comScore Media Metrix. IGN’s network of videogame-related properties (IGN.com, GameSpy, FilePlanet, TeamXbox, Direct2Drive and others), is one of the Web’s leading videogame information destinations and attracts one of the largest concentrated audiences of young males on the Internet. IGN also owns and operates the popular movie-related website, Rotten Tomatoes and one of the leading male lifestyle Websites, AskMen.com. In addition, IGN provides technology for online game play in videogames. IGN is headquartered in the San Francisco Bay Area, with offices throughout the U.S. and in Montreal.

Source: FilePlanet
   

Web Site:  http://www.fileplanet.com/
http://www.ijji.com/

Economic downturn is lowering software prices

Filed under: Business, Technology — Tags: , , , — David Kirkpatrick @ 1:59 pm

The global recession hasn’t lowered prices in all categories, but it looks like software is one sector where prices are being pushed lower. In a time of limited spending for many firms, I’d say pushing dollars toward the IT budget might be a good idea.

From the link:

The worldwide recession has hammered IT budgets but has also prompted vendors to make their software pricing and licensing models more customer-friendly, according to a new Forrester Research report. Forrester’s report looked at how 12 enterprise software vendors’ pricing and licensing strategies changed in the fourth quarter of 2008 and the first quarter of this year.

Easily the most dramatic change was SAP’s recent, well-publicized agreement with user groups around KPIs (key performance indicators) to prove the value of its fuller-featured but more expensive Enterprise Support service.

“The SAP change hasn’t really caused the market to go and say, we’ll do KPIs now,” said the report’s author, analyst Ray Wang.

But overall, “it is truly a buyer’s market,” he said. “We’re really seeing vendors being a lot more accommodating, especially with new customers.”

Looking into cells with nanotech

Filed under: Science, Technology — Tags: , , , , — David Kirkpatrick @ 1:49 pm

Via KurzweilAI.net — Just wow.

Revolutionary Ultrasonic Nanotechnology May Allow Scientists To See Inside Patient’s Individual Cells
Science Daily, June 3, 2009

Nanoscale GHz-range ultrasonic technology that could allow scientists to see inside a patient’s individual cells to help diagnose serious illnesses is being developed by researchers at The University of Nottingham.

The new technology may also allow scientists to see objects even smaller than optical microscopes can resolve, and may be so sensitive that it can measure single molecules.

 
