David Kirkpatrick

January 26, 2010

Optical computing breakthrough

Filed under: Science, Technology — David Kirkpatrick @ 3:06 pm

Via KurzweilAI.net — It really is fun watching where the next big advancement in computing will come from. Optical computers, quantum computers, something we haven’t even heard of yet? One thing is certain: computers will continue to become more and more powerful for the foreseeable future.

Spasers set to sum: A new dawn for optical computing
New Scientist Tech, Jan. 25, 2010

The “spaser,” the latest by-product of nanoplasmonics, a buzzing field built on plasmons, may lead to a super-fast computer that computes with light.

Plasmons, which are ultra-high-frequency electron waves on a metallic surface, overcome the speed limits of the wires that interconnect transistors in chips, allowing electronic signals to be converted into photonic ones and back again quickly and efficiently.

October 9, 2009

Petaflop academic supercomputer

Now that’s fast.

The release:

Kraken becomes first academic machine to achieve petaflop

IMAGE: The newly upgraded Kraken supercomputer, capable of a peak performance of more than one petaflop.

The National Institute for Computational Sciences’ (NICS’s) Cray XT5 supercomputer—Kraken—has been upgraded to become the first academic system to surpass a thousand trillion calculations a second, or one petaflop, a landmark achievement that will greatly accelerate science and place Kraken among the top five computers in the world.

Managed by the University of Tennessee (UT) for the National Science Foundation (NSF), the system came online Oct. 5 with a peak performance of 1.03 petaflops. It features more than 16,000 six-core 2.6-GHz AMD Istanbul processors with nearly 100,000 compute cores.

In addition, an upgrade to 129 terabytes of memory (the equivalent of more than 13 thousand movies on DVD) effectively doubles the size of Kraken for researchers running some of the world’s most sophisticated 3-D scientific computing applications. Simulation has become a key tool for researchers in a number of fields, from climate change to materials.
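A quick back-of-the-envelope check shows how those numbers hang together. Assuming each AMD Istanbul core retires four double-precision floating-point operations per cycle (an assumption about the chip, not a figure from the release), the core count and clock rate land almost exactly on the quoted peak, and the DVD comparison works out to roughly dual-layer discs:

# Sanity check of the quoted Kraken specs (Python).
# Assumption: an AMD Istanbul core does 4 double-precision FLOPs per cycle.
processors = 16_000              # "more than 16,000 six-core ... processors"
cores = processors * 6           # 96,000 -- "nearly 100,000 compute cores"
clock_hz = 2.6e9                 # 2.6 GHz
flops_per_cycle = 4              # assumed for Istanbul (SSE, double precision)
peak = cores * clock_hz * flops_per_cycle
print(f"peak: {peak / 1e15:.2f} petaflops")          # ~1.00, vs. the quoted 1.03
print(f"per movie: {129e12 / 13_000 / 1e9:.1f} GB")  # ~9.9 GB, a dual-layer disc

The small gap to 1.03 petaflops suggests the real processor count sits a bit above the round 16,000 quoted here.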

“At over a petaflop of peak computing power, and the ability to routinely run full machine jobs, Kraken will dominate large-scale NSF computing in the near future,” said NICS Project Director Phil Andrews. “Its unprecedented computational capability and total available memory will allow academic users to treat problems that were previously inaccessible.”

For example, understanding the mechanism behind the explosion of core-collapse supernovas will reveal much about our universe (these cataclysmic events are responsible for more than half the elements in the universe). Essentially three phenomena are being simulated to explore these explosions: hydrodynamics, nuclear burning or fusion, and neutrino transport, said UT astrophysicist Bronson Messer.

At the terascale, or trillions of calculations per second, Messer and his team were forced to simulate the star in 1-D as a perfect sphere and with unrealistic fusion physics. “Now, however, we are getting closer to physical reality,” said Messer. “With petascale capability, we can simulate all three phenomena simultaneously with significant realism. This brings us closer to understanding the explosion mechanism and being able to make meaningful predictions.”

From the physical makeup of the universe to the causes of global warming to the roles of proteins in disease, Kraken’s increased computing muscle will reach far and wide.

As the main computational resource for NICS, the new system is linked to the NSF-supported TeraGrid, a network of supercomputers across the country that is the world’s largest computational platform for open scientific research.

The system and the resulting NICS organization are the result of an NSF Track 2 award of $65 million to the University of Tennessee and its partners to provide for next-generation high-performance computing (HPC). The award was won in an open competition among HPC institutions vying to guarantee America’s continued competitiveness through the next generation of supercomputers (systems greater than 10 teraflops and into the petascale).

“While reaching the petascale is a remarkable achievement in itself, the real strides will be made in the new science that petascale computing will enable,” said Thomas Zacharia, NICS principal investigator, professor in electrical and computer engineering at the University of Tennessee and deputy director for science and technology at Oak Ridge National Laboratory. “Kraken is a game changer for research.”

###

February 12, 2009

February 2009 media tips from Oak Ridge National Laboratory

The release:

February 2009 Story Tips

(Story Tips Archive)

Story ideas from the Department of Energy’s Oak Ridge National Laboratory. To arrange for an interview with a researcher, please contact the Communications and External Relations staff member identified at the end of each tip.

MICROSCOPY — STEM in liquid

Researchers at ORNL and Vanderbilt University have unveiled a new technique for imaging whole cells in liquid using a scanning transmission electron microscope. Electron microscopy is the most important tool for imaging objects at the nanoscale, the size of molecules and objects in cells, but it requires a high vacuum, which has prevented imaging of samples in liquid, such as biological cells. The new technique, liquid STEM, uses a microfluidic device with electron-transparent windows to enable the imaging of cells in liquid. A team led by Niels de Jonge imaged individual molecules in a cell, with significantly improved resolution and speed compared with existing imaging methods. “Liquid STEM has the potential to become a versatile tool for imaging cellular processes on the nanometer scale,” said de Jonge. “It will potentially be of great relevance for the development of molecular probes and for the understanding of the interaction of viruses with cells.” The work was recently described in the online Proceedings of the National Academy of Sciences.

BIOLOGY — Time-saving tool

Scientists studying human health, agriculture and the environment have a powerful new tool to help them better understand microbial processes and how they relate to ecosystems. The GeoChip consolidates into one analysis what would require dozens of tests and possibly years to complete using traditional methods, according to co-developer Chris Schadt of ORNL’s Biosciences Division. This lab-on-a-chip features more than 24,000 gene probes that target more than 150 functional gene groups involved in biochemical, ecological and environmental processes. The GeoChip is especially useful for bioremediation of sediments and soils, for determining the role of microbes in soil, and for learning how microbial processes are connected to ecosystem responses to human-induced environmental changes such as temperature, moisture and carbon dioxide. This research was funded by the Department of Energy’s Office of Biological and Environmental Research.


CYBERSPACE — Thwarting threats

Colonies of cyber robots with unique missions can detect, in near real time, network intruders on computers that support U.S. infrastructure. These “cybots,” created for an ORNL software program called UNTAME (Ubiquitous Network Transient Autonomous Mission Entities), may be especially useful for helping government agencies deter, defend against, protect against and defeat cyber attacks. “What scares us the most isn’t what we can see, but rather what we can’t see,” said Joe Trien of the lab’s Computational Sciences & Engineering Division. “A coordinated cyber attack could disrupt one or more of U.S. critical infrastructures, and these attacks can reach across the world at the speed of light.” Trien led a team of researchers that developed UNTAME.


COMPUTING — First petascale projects

The National Center for Computational Sciences at Oak Ridge National Laboratory has granted early access to a number of projects to test Jaguar, which has a peak performance of 1.6 petaflops and is the most powerful computer in the world for open science. The “Petascale Early Science” period will run approximately six months and consist initially of 20 projects, said NCCS Director of Science Doug Kothe. The early phase seeks to deliver high-impact science results and advancements, harden the system for production, and engage a broad user community, Kothe said. Proposals include modeling to better understand climate change; energy storage and battery technology; cellulose conversion to ethanol; combustion research for more efficient automobile engines; and high-temperature superconductors for more efficient transmission of electricity. Fusion, nuclear energy, materials science, nuclear physics, astrophysics, and carbon sequestration also will be explored. “These early simulations on Jaguar will also help us harden the system for a broader collection of projects later in the year,” said Kothe.

October 16, 2008

Creating an exaFLOPS supercomputer?

From KurzweilAI.net — Yowza! A one exaflop supercomputer would be one monster machine.

Opinion – Reaching for the Exa-scale, by BOINC-ing
iSGTW, Oct. 15, 2008

How do you create a 1 exaFLOPS supercomputer (1,000 times faster than the current leader, the 1 petaFLOPS Roadrunner supercomputer)?

By creating a grid of 4 million volunteers with 1 teraFLOPS GPU-equipped PCs, available an average of 25 percent of the time, says David Anderson, founder of the popular volunteer computing platform BOINC (Berkeley Open Infrastructure for Network Computing), who estimates this could happen in 2010 — years ahead of other paradigms.*
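The arithmetic behind Anderson’s estimate is worth spelling out; all three numbers come straight from the article:

volunteers = 4_000_000      # PCs with 1 teraFLOPS GPUs
gpu_flops = 1e12            # 1 teraFLOPS each
availability = 0.25         # online an average of 25% of the time
sustained = volunteers * gpu_flops * availability
print(f"{sustained / 1e18:.1f} exaFLOPS")   # 1.0 exaFLOPS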

GPU architectures have steadily become more general-purpose, and the latest models do double-precision floating-point math. In 2007, NVIDIA released a system called CUDA that allows GPUs to be programmed in the C language, making it much easier for scientists to develop and port applications to run on GPUs.

* Based on the current speed trend, KurzweilAI.net projects that supercomputer speeds would hit 1 exaFLOPS in 2018.
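That 2018 figure is roughly what you get by extrapolating a doubling of top-supercomputer performance every year (my rough reading of the Top500 trend line, not a number from the article):

import math

start_year, start_flops = 2008, 1e15   # Roadrunner, ~1 petaFLOPS
target_flops = 1e18                    # 1 exaFLOPS
annual_growth = 2.0                    # assumed ~2x per year (Top500 trend)
years = math.log(target_flops / start_flops, annual_growth)
print(f"~{start_year + years:.0f}")    # ~2018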


September 25, 2008

Quantum computing and more

We’re getting closer to harnessing quantum mechanics to create supercomputers and other devices.

From the link:

The brave new world of quantum technology may be a big step closer to reality thanks to a team of University of Calgary researchers that has come up with a unique new way of testing quantum devices to determine their function and accuracy. Their breakthrough is reported in today’s edition of Science Express, the advanced online publication of the prestigious journal Science.

“Building quantum machines is difficult because they are very complex, therefore the testing you need to do is also very complex,” said Barry Sanders, director of the U of C’s Institute for Quantum Information Science and a co-author of the paper. “We broke a bunch of taboos with this work because we have come up with an entirely new way of testing that is relatively simple and doesn’t require a lot of large and expensive diagnostic equipment.”

Similar to any electronic or mechanical device, building a quantum machine requires a thorough understanding of how each part operates and interacts with other parts if the finished product is going to work properly. In the quantum realm, scientists have been struggling to find ways to accurately determine the properties of individual components as they work towards creating useful quantum systems. The U of C team has come up with a highly accurate method for analyzing quantum optical processes using standard optical techniques involving lasers and lenses.

April 4, 2008

Red pill or blue pill?

Filed under: Technology — David Kirkpatrick @ 2:08 pm

From KurzweilAI.net:

Matrix-style virtual worlds ‘a few years away’
New Scientist news service, April 3, 2008

Are supercomputers on the verge of creating Matrix-style simulated realities?

Michael McGuigan at Brookhaven National Laboratory thinks so, and has used the Lab’s Blue Gene/L supercomputer to generate a photorealistic, real-time artificial world. He found that conventional ray-tracing software could run 822 times faster on the Blue Gene/L than on a standard computer, allowing it to convincingly mimic natural lighting in real time.
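To make “conventional ray-tracing software” concrete, here is a toy sketch in Python of the work done for a single pixel: cast a ray, intersect it with a sphere, and shade by the angle to a light. It is purely illustrative, not McGuigan’s code:

import math

# Toy ray tracer core: one ray, one sphere, one light (illustrative only).
def ray_sphere(origin, direction, center, radius):
    """Nearest hit distance along a normalized ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c               # a == 1 for a normalized direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def brightness(hit, center, light):
    """Lambertian shading: proportional to the cosine of the light angle."""
    normal = [h - c for h, c in zip(hit, center)]
    to_light = [l - h for l, h in zip(light, hit)]
    dot = sum(n * v for n, v in zip(normal, to_light))
    norm = math.dist(center, hit) * math.dist(light, hit)
    return max(0.0, dot / norm)

origin, direction = (0, 0, 0), (0, 0, 1)   # camera ray straight down +z
center, radius, light = (0, 0, 5), 1.0, (5, 5, 0)
t = ray_sphere(origin, direction, center, radius)
if t is not None:
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    print(f"hit at {hit}, brightness {brightness(hit, center, light):.2f}")

A production renderer repeats this for millions of rays per frame, plus shadows and reflections, which is why an 822-fold speedup is what pushes it toward real time.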

The ultimate objective is to pass the “Graphics Turing Test,” in which a human judge viewing and interacting with an artificially generated world should be unable to reliably distinguish it from reality.

He believes that should be possible in the next few years, once supercomputers enter the petaflop range and parallel computing matures.

March 26, 2008

Silicon photonics tech

Filed under: Science, Technology — David Kirkpatrick @ 2:35 am

From KurzweilAI.net:

Replacing Wire With Laser, Sun Tries to Speed Up Data
New York Times, Mar. 24, 2008

Sun Microsystems is planning to announce that it has received a $44 million contract from the Pentagon to explore the high-risk idea of replacing the wires between computer chips with laser beams. The “silicon photonics” technology would eradicate the most daunting bottleneck facing today’s supercomputer designers: moving information rapidly to solve problems that require hundreds or thousands of processors.

February 25, 2008

Exaflop computer in planning stage

More news from the KurzweilAI.net newsletter:

Wow!

‘Exaflop’ Supercomputer Planning Begins
Information Week, Feb. 22, 2008

Researchers at Sandia and Oak Ridge National Laboratories have launched the Institute for Advanced Architectures to do basic research on issues such as power consumption and reliability for an exaflop (10^18 floating point operations per second) system that could have a million hundred-core processors.
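Note what that architecture implies per core: spreading the exaflop target across a hundred million cores leaves a budget of about 10 gigaflops each (my arithmetic, not a figure from the article):

target = 1e18               # 1 exaflop
cores = 1_000_000 * 100     # a million hundred-core processors
print(f"{target / cores / 1e9:.0f} GFLOPS per core")  # 10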

The U.S. Department of Energy and the National Security Agency expect to need exaflop computing by 2018 for large-scale prediction, materials science analysis, fusion research, and national security problems.