Tuesday, October 14, 2008

The Fourth Dimension as Space

Sometimes, the fourth dimension is interpreted in the spatial sense: a space with literally 4 spatial dimensions, 4 mutually orthogonal directions of movement. This is the space used by mathematicians when studying geometric objects such as 4-dimensional polytopes. To avoid confusion with the more common Einsteinian notion of time being the fourth dimension, however, the use of this spatial interpretation should be stated at the outset.

Mathematically, the 4-dimensional spatial equivalent of conventional 3-dimensional geometry is the Euclidean 4-space, a 4-dimensional normed vector space with the Euclidean norm. The "length" of a vector

 \mathbf{x} = (p, q, r, s)

expressed in the standard basis is given by

 \| \mathbf{x} \| = \sqrt{p^{2} + q^{2} + r^{2} + s^{2}}

which is the natural generalization of the Pythagorean Theorem to 4 dimensions. This allows for the definition of the angle between two vectors (see Euclidean space for more information).
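
To make the norm and angle formulas concrete, here is a minimal Python sketch (an illustration added here, not taken from any source); the function names norm4 and angle4 are just placeholders.

```python
import math

def norm4(v):
    """Euclidean norm of a 4-vector (p, q, r, s)."""
    return math.sqrt(sum(x * x for x in v))

def angle4(u, v):
    """Angle (in radians) between two 4-vectors, via the dot product."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(dot / (norm4(u) * norm4(v)))

# Example: the norm generalizes Pythagoras, and two unit steps along
# different coordinate axes are at 90 degrees to each other.
print(norm4((1.0, 2.0, 2.0, 4.0)))                       # 5.0
print(math.degrees(angle4((1, 0, 0, 0), (0, 1, 0, 0))))  # 90.0
```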

TIME

Time is often referred to as the "fourth dimension". It is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one temporal dimension, and we cannot move freely in time but instead seem, subjectively, to move in one direction.

The equations used in physics to model reality do not treat time in the same way that humans perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy).

The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (later extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold known as spacetime; in the special, flat case this is Minkowski space.
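
To make the flat (Minkowski) case concrete, here is a small Python sketch (added for illustration) of the spacetime interval between two events; the (-, +, +, +) sign convention used below is an assumption, as the opposite convention is equally common.

```python
# Squared spacetime interval in flat (Minkowski) spacetime, using the
# (-, +, +, +) signature.  Negative: timelike separation; zero: lightlike;
# positive: spacelike.
C = 299_792_458.0  # speed of light, m/s

def interval_squared(dt, dx, dy, dz):
    return -(C * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

# Two events one second apart at the same location are timelike separated:
print(interval_squared(1.0, 0.0, 0.0, 0.0) < 0)  # True
```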

Sunday, October 5, 2008





A California-based company founded in 2003, Artificial Development, is developing neural network cognitive systems and wants to introduce "the first 5th Generation computer to the world." According to e4engineering.com, the company recently completed a representation of a functioning human brain. This project, named CCortex, has vast ambitions. The company hopes that its "software may have immediate applications for data mining, network security, search engine technologies and natural language processing." This software runs on a Linux cluster with 1,000 processors, and the CCortex system has 20 billion neurons and 20 trillion connections. The company says this is "the first neural system to achieve a level of complexity rivaling that of the mammalian brain."

Here is a picture of the CCortex Linux cluster, which has 500 nodes with a total of 1,000 processors, 1 terabyte of RAM, and 200 terabytes of storage (Credit: Artificial Development). The company says that "CCortex is up to 10,000 times larger than any previous attempt to replicate, partially or completely, primary characteristics of human intelligence."

Here is the status of the project.

The first CCortex-based Autonomous Cognitive Model ('ACM') computer 'persona,' named 'Kjell' in homage to AI pioneer Alan Turing, was activated last month and is in early testing stages. CCortex, Artificial Development's high-performance, parallel supercomputer, runs the persona simulation.
The ACM is intended as a test-bed for future models and is still incomplete. While the Kjell persona uses a realistic frontal cortex, motor and somatosensory areas, it still lacks the visual and auditory cortex areas, two of the most important cortical structures. Other structures, such as the hippocampus, basal ganglia and thalamic systems, are still being developed and are unable to perform most normal functions.

How does this work?

The ACM interacts with trainers using a text console, reading the trainer's input and writing answers back, similar to a conventional 'chat' program. The ACM is being trained with a stimulus-reward learning process based on classical conditioning rules. It is encouraged to respond to simple text commands, associating previous input with rewarded responses.
The ACM uses the associative cortex to 'evolve' possible competing responses. Large populations of neurons compete for their own associated response until the strongest group overcomes the others. The 'winner' response is then tested and rewarded or deterred, depending on its validity. The ACM takes new experiences into account and uses them to modify the equilibrium between the responses and the strength of the associated neural paths, creating a new neural status quo with better chances of generating accurate responses.
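
As a toy illustration of the stimulus-reward competition described above (a sketch of the general idea only, not Artificial Development's actual code), candidate responses can be made to compete, with the winner strengthened or weakened depending on the trainer's reward:

```python
import random

# Candidate responses "compete"; the strongest (plus a little noise) wins,
# and the winner is strengthened if rewarded, weakened if not.
strengths = {"hello": 1.0, "goodbye": 1.0, "silence": 1.0}

def respond(strengths):
    return max(strengths, key=lambda r: strengths[r] + random.gauss(0, 0.1))

def reinforce(strengths, response, rewarded, rate=0.2):
    strengths[response] *= (1 + rate) if rewarded else (1 - rate)

for _ in range(50):                               # trainer rewards "hello"
    r = respond(strengths)
    reinforce(strengths, r, rewarded=(r == "hello"))

print(max(strengths, key=strengths.get))          # typically "hello"
```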

So CCortex is intended to simulate our brains. Yet the company employs only programmers and mathematicians. It has no neuroscientists, even though it plans to recruit some. I wonder if the company can deliver interesting results without any brain specialists on board.

It's also interesting to note that the company was founded -- and funded -- by a man who made money by managing some of the largest ISPs in Spain, an activity very different from studying the brain.

And a final remark: how did the company manage to get such a short domain name, ad.com? Any clues?

Sources: Artificial Development press release, via e4engineering.com, July 14, 2004

Blue Brain

Blue Brain is a project, begun in May 2005, to create a computer simulation of the brain of mammals, including the human brain, down to the molecular level.[1] The aim is to study the brain's architectural and functional principles. The project was founded by Henry Markram of the Brain and Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland[1] and is based on 15 years of experimental data obtained from reverse engineering the microcircuitry of the neocortical column.

The project uses a Blue Gene supercomputer,[1] running simulation software, the MPI-based Neocortical Simulator (NCS) developed by Phil Goodman, which is to be combined with Michael Hines's NEURON software. The simulation will not consist of a mere artificial neural network, but will involve much more biologically realistic models of neurons.

The initial goal of the project, completed in December 2006,[2] was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought). Such a column is about 2 mm tall, has a diameter of 0.5 mm and contains about 60,000 neurons in humans; rat neocortical columns are very similar in structure but contain only 10,000 neurons (and about 10^8 synapses). Between 1995 and 2005, Markram mapped the types of neurons and their connections in such a column.

In November 2007[3], the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.

Now that the column is finished, the project is pursuing two separate goals:

  1. construction of a simulation on the molecular level,[1] which is desirable since it allows the effects of gene expression to be studied;
  2. simplification of the column simulation to allow for parallel simulation of large numbers of connected columns, with the ultimate goal of simulating a whole neocortex (which in humans consists of about 1 million cortical columns); a rough scale estimate using these figures follows below.
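
Taking the figures quoted above at face value (about 10,000 neurons and 10^8 synapses per rat column, roughly 60,000 neurons per human column, and about 1 million columns in the human neocortex), a quick back-of-envelope calculation gives a sense of the scale; these are order-of-magnitude numbers only.

```python
# Back-of-envelope scale estimate using the figures quoted in the text.
neurons_per_human_column = 60_000
columns_in_human_neocortex = 1_000_000
neurons_per_rat_column = 10_000
synapses_per_rat_column = 10 ** 8

neocortex_neurons = neurons_per_human_column * columns_in_human_neocortex
synapses_per_neuron = synapses_per_rat_column / neurons_per_rat_column

print(f"human neocortex neurons ~ {neocortex_neurons:.0e}")             # ~6e+10
print(f"synapses per neuron (rat column) ~ {synapses_per_neuron:.0f}")  # ~10000
```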

The Blue Brain project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.

In July 2005, EPFL and IBM announced an exciting new research initiative - a project to create a biologically accurate, functional model of the brain using IBM's Blue Gene supercomputer. Analogous in scope to the Genome Project, the Blue Brain will provide a huge leap in our understanding of brain function and dysfunction and help us explore solutions to intractable problems in mental health and neurological disease.

At the end of 2006, the Blue Brain project had created a model of the basic functional unit of the brain, the neocortical column. At the push of a button, the model could reconstruct biologically accurate neurons based on detailed experimental data, and automatically connect them in a biological manner, a task that involves positioning around 30 million synapses in precise 3D locations.

In November 2007, the Blue Brain project reached an important milestone and the conclusion of its first Phase, with the announcement of an entirely new data-driven process for creating, validating, and researching the neocortical column.

Artificial intelligence


Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a world champion.

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it.

Major AI textbooks define the field as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3] defines it as "the science and engineering of making intelligent machines."[4]
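
To make that definition concrete, here is a minimal Python sketch of an agent in this sense, using the classic vacuum-cleaner toy world from the textbook literature; the class and method names are illustrative only.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An intelligent agent: something that maps percepts to actions."""
    @abstractmethod
    def act(self, percept):
        ...

class ReflexVacuumAgent(Agent):
    """Toy example: clean the current square if dirty, otherwise move on."""
    def act(self, percept):
        location, dirty = percept
        if dirty:
            return "suck"
        return "right" if location == "A" else "left"

agent = ReflexVacuumAgent()
print(agent.act(("A", True)))   # suck
print(agent.act(("A", False)))  # right
```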

Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[5] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of some AI research.[6]

AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic.[7] AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.[8]

Other names for the field have been proposed, such as computational intelligence,[9] synthetic intelligence,[9] intelligent systems,[10] or computational rationality.[11] These alternative names are sometimes used by researchers who wish to distance their work from the symbolic side of AI (considered outdated by many; see GOFAI), which is often associated with the term "AI" itself.




IBM AIMS TO SIMULATE ARTIFICIAL BRAIN







NEW YORK - IBM has embarked on a quest for the holy grail of neuroscience--the far-off goal of creating a computer simulation of the human brain.

When the first mammals evolved from reptiles 200 million years ago, one of the biggest changes was inside their heads. Their brain cells were structured together into columns, an innovation that could be repeated like a computer chip to make larger and more powerful minds--from mice to cats and dogs to humans.

"This was the jump from reptiles to mammals," says Henry Markram, founder of the Brain/Mind Institute at the Ecole Polytechnique Fédérale in Lausanne, Switzerland. "It was like discovering a G5 processor or Pentium 4 and just copying it."

Now, Markram is announcing a collaboration with IBM to create a computer simulation of these fundamental neurological units, called neocortical columns. The process will involve building a Blue Gene supercomputer with 8,000 processors that can roar along at 23 trillion operations per second. Each processor will be used to simulate one or two neurons. If finished immediately, the machine would be one of the five fastest supercomputers in the world.

A neocortical column is a structure half a millimeter in diameter and 2 millimeters long that contains about 60,000 neurons. (The human brain is made of 10 billion neurons.) The columns were discovered by Nobel Prize-winner Torsten Wiesel of Rockefeller University. They remain similar in different mammals, but the human brain is crammed with more of them. It was the need to fit in more columns that forced the human brain into its crinkly, wrinkled shape.
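
As a rough sanity check on these numbers (my own arithmetic, not from the article): 8,000 processors simulating one or two neurons each covers on the order of a single rat column (about 10,000 neurons, per the Blue Brain material above), well short of the roughly 60,000 neurons in a human column.

```python
# Neurons covered by the planned machine at one or two neurons per processor.
processors = 8_000
for neurons_per_cpu in (1, 2):
    print(neurons_per_cpu, "neuron(s)/CPU ->", processors * neurons_per_cpu, "neurons")
# 1 neuron(s)/CPU -> 8000 neurons
# 2 neuron(s)/CPU -> 16000 neurons
```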

In Switzerland, Markram has put together a large lab dedicated to studying neocortical columns in animals. His first effort with IBM will be to simulate a single rat neocortical column. That alone is likely to take several years, as the computer model is rigorously checked in experiments against neocortical columns taken from rats.

Once it is clear that one column has been simulated, the project will move on to simulating several such columns, again verifying its results by experiments with real brain tissues. Then, it will be possible to create larger simulations. After a decade or more, it may even be possible to create a model of the human brain. Markram and IBM both emphasize that the project would not create artificial intelligence but a way to study how neurons in the brain interact with one another.

"We believe that we will be able to capture the heart of the information process," says Markram. "Not just the column but how the information was formed in memories and retrieved."

The project could lead to new understandings of various diseases such as schizophrenia, autism and attention deficit hyperactivity disorder. Degenerative diseases such as Alzheimer's may turn out to be more difficult to model because they involve the failure of more than just brain cells, Markram says.

For IBM, the project represents one of several initiatives in its Blue Gene program, which involves building supercomputers based on a powerful new computer architecture. The most powerful of these units, at Lawrence Livermore National Laboratory, is 16 times more powerful than the one Markram is using and will be used to simulate the intricate ways that proteins fold--one of biology's big mysteries. Other efforts exist in astrophysics, atmospheric modeling and financial modeling.


BLACK HOLE

What is a black hole?

A black hole is a region of spacetime from which nothing, not even light, can escape.

To see why this happens, imagine throwing a tennis ball into the air. The harder you throw the tennis ball, the faster it is travelling when it leaves your hand and the higher the ball will go before turning back. If you throw it hard enough, it will never return; the gravitational attraction will not be able to pull it back down. The velocity the ball must have to escape is known as the escape velocity, and for the Earth it is about 7 miles a second.

As a body is crushed into a smaller and smaller volume, the gravitational attraction increases, and hence the escape velocity gets bigger. Things have to be thrown harder and harder to escape. Eventually a point is reached when even light, which travels at 186 thousand miles a second, is not travelling fast enough to escape. At this point, nothing can get out as nothing can travel faster than light. This is a black hole.
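
The same argument can be put in numbers. Here is a small Python sketch (added for illustration) that recovers the 7-miles-per-second figure for the Earth and, by setting the escape velocity equal to the speed of light, the radius below which a given mass becomes a black hole (the Schwarzschild radius, r = 2GM/c^2):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
M_SUN = 1.989e30     # kg

def escape_velocity(mass, radius):
    """Speed needed to escape a mass from a given radius: sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass / radius)

def schwarzschild_radius(mass):
    """Radius at which the escape velocity equals the speed of light."""
    return 2 * G * mass / C ** 2

print(escape_velocity(M_EARTH, R_EARTH) / 1609.34)  # ~7 miles per second
print(schwarzschild_radius(M_SUN) / 1000)           # ~3 km for the Sun
```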

Do they really exist?

It is impossible to see a black hole directly because no light can escape from it; it is black. But there are good reasons to think black holes exist.

When a large star has burnt all its fuel it explodes into a supernova. The stuff that is left collapses down to an extremely dense object known as a neutron star. We know that these objects exist because several have been found using radio telescopes.

If the neutron star is too large, the gravitational forces overwhelm the pressure gradients and collapse cannot be halted. The neutron star continues to shrink until it finally becomes a black hole. This mass limit is only a couple of solar masses, that is about twice the mass of our sun, and so we should expect at least a few neutron stars to have this mass. (Our sun is not particularly large; in fact it is quite small.)

(Image gallery: stellar black holes, mid-mass black holes, and supermassive black holes; a spiral galaxy located about 12 million light years from Earth; a sample of nine galaxies each containing supermassive black holes in their centers; a compilation of long observations of the same patch of sky; a black hole found where few thought one could ever exist, inside a globular star cluster; a binary star system consisting of a black hole and a normal star, located about 11,000 light years from Earth; Chandra's image of the Galactic Center, which has provided evidence for a new and unexpected way for stars to form.)

How the First Stars Were Born.



A new supercomputer simulation offers the most detailed view yet of how the first stars evolved after the Big Bang.

The model follows the simpler physics that ruled the early universe to see how cold clumps of gas eventually grew into giant star embryos.

"Until you put that physics in the code, you can't evaluate how the first protostars formed," said Lars Hernquist, an astrophysicist at Harvard University whose early-stars model is detailed in this week's issue of the journal Science. His remarks were made Wednesday during a press teleconference.

Mysterious "dark matter" provided the first gravitational impetus for hydrogen and helium gas to start clumping together, Hernquist said. The gas began releasing energy as it condensed, forming molecules from atoms, which further cooled the clump and allowed for even greater condensing.

Unlike previous models, the latest simulation takes this cooling process of "complex radiative transfer" into account, said Nagoya University astrophysicist Naoki Yoshida, who headed up the modeling project.

Eventually gravity could not condense the gas cloud any further, because the densely-packed gas exerted a pressure against further collapse. That equilibrium point marked the beginning of an embryonic star, called a protostar by astronomers.

Simulation runs show that the first protostar likely started with just 1 percent the mass of our sun, but would have swelled to more than 100 solar masses in 10,000 years.
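
A quick bit of arithmetic on those figures (my own illustration, using only the numbers quoted above): growing from 1 percent of a solar mass to more than 100 solar masses in 10,000 years implies an average accretion rate of roughly one hundredth of a solar mass per year.

```python
# Average accretion rate implied by the quoted protostar growth figures.
initial_mass = 0.01   # solar masses
final_mass = 100.0    # solar masses
years = 10_000

rate = (final_mass - initial_mass) / years
print(f"average accretion rate ~ {rate:.3f} solar masses per year")  # ~0.010
```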

"No simulation has ever gotten to the point of identifying this important stage in the birth of a star," Hernquist noted.

The first protostars reached such massive size because they consisted mainly of simple elements such as hydrogen and helium. That bloated existence means the stars which eventually form from such protostars could create heavier elements such as oxygen, carbon, nitrogen and iron in their fiery furnaces.

The researchers eventually hope to run the simulation all the way up through the point where protostars ignite into true stars.


Our Solar System Born in 'Little Bang'

First there was the theoretical Big Bang that got the universe going. Several billion years passed. Then a Little Bang birthed our solar system.

At least scientists have long thought that's how it went, and now they have a computer model to back up the idea that our sun is the product of an explosive event. The new modeling finds that a supernova, or exploding star, could indeed have triggered birth of our sun in a dense cloud of gas and dust, the researchers say.

Stars are born when a cloud of material collapses. Exactly what triggers the collapse is not entirely known. One idea is that most stars, including perhaps our sun, were created in dense starbirth regions when another very massive star exploded, putting intense pressure on surrounding clouds.

"We've had chemical evidence from meteorites that points to a supernova triggering our solar system's formation since the 1970s," said theorist Alan Boss of the Carnegie Institution. "But the devil has been in the details. Until this study, scientists have not been able to work out a self-consistent scenario, where collapse is triggered at the same time that newly created isotopes from the supernova are injected into the collapsing cloud."

Short-lived radioactive isotopes -- versions of elements with the same number of protons, but a different number of neutrons -- found in very old meteorites decay on time scales of millions of years and turn into different (so-called daughter) elements. Finding the daughter elements in primitive meteorites implies that the parent short-lived radioisotopes must have been created only a million or so years before the meteorites themselves were formed.

"One of these parent isotopes, iron-60, can be made in significant amounts only in the potent nuclear furnaces of massive or evolved stars," Boss today. "Iron-60 decays into nickel-60, and nickel-60 has been found in primitive meteorites. So we've known where and when the parent isotope was made, but not how it got here."

Previous models by Boss and former DTM Fellow Prudence Foster showed that the isotopes could be deposited into a pre-solar cloud if a shock wave from a supernova explosion slowed to 6 to 25 miles per second and the wave and cloud had a constant temperature of -440 degrees Fahrenheit (10 K).

"Those models didn't work if the material was heated by compression and cooled by radiation, and this conundrum has left serious doubts in the community about whether a supernova shock started these events over four billion years ago or not," said Harri Vanhala, who found the negative result in his Ph.D. thesis work at the Harvard-Smithsonian Center for Astrophysics in 1997.

In several runs on the computer, the shock front was made to strike a pre-solar cloud of material with the mass of our sun, consisting of dust, water, carbon monoxide, and molecular hydrogen, reaching temperatures as high as 1,340°F (1000 K). In the absence of cooling, the cloud could not collapse.

However, with a newly crafted theoretically plausible cooling law, the researchers found that after 100,000 years the pre-solar cloud was 1,000 times denser than before, and that heat from the shock front was rapidly lost, resulting in only a thin layer with temperatures close to 1,340 degrees F (1000 K). After 160,000 years, the cloud center had collapsed to become a million times denser, forming the protosun. The researchers found that isotopes from the shock front were mixed into the protosun in a manner consistent with their origin in a supernova.

Other studies have suggested that our sun may have been born in a very crowded environment, near massive stars, but has since drifted to its relatively lonely position in space.

"This is the first time a detailed model for a supernova triggering the formation of our solar system has been shown to work," said Boss. "We started with a Little Bang 9 billion years after the Big Bang."

The results are detailed in the Oct. 20 issue of the Astrophysical Journal. Boss has previously shown that all this violent activity might have contributed to the makeup of our planetary system, too.