Gray Matters: Limits of current neural technologies


Imagine, for a moment, a man. While playing a pickup game of soccer with a cast of his most amiable neighbors and peers, he falls and hits his head on the curb.

After a chorus of gasps and hollers, someone drives him to the hospital.

From there, he’s delivered to a neurologist who seems far too eager to put him in a long, white tube.

This tube is called a magnetic resonance imaging (MRI) scanner, and it is a common tool neurologists use to image a person's brain and diagnose trauma.

The tube surrounds the body with a powerful magnetic field that forces protons to align with it. Then, a short burst of radio waves is sent to the location to be imaged (which can be any part of the body), knocking the protons out of alignment.

As the protons realign with the magnetic field, they release radio signals that the scanner picks up to create an image.

The neurologist is given a series of images that represent the patient's brain. With this, the neurologist can identify the most injured region of the brain and determine a course of action.

Great, what more could we ask for from this tool? It helped identify the problem and will help solve it.

Eh, not really. At least, not half as well as it could.

Many people would probably consider MRI to be a sophisticated, state-of-the-art method of diagnosis.

In my last column, I stated that the first modern neuroscience lab was founded in 1964. MRI was first performed on a human in 1977, which means the technique has been around for 45 of modern neuroscience's roughly 60 years. In that light, one can begin to see MRI as relatively crude.

Fundamentally, MRI is a noninvasive way to scan the brain. In neuroscience, a noninvasive measure is one that involves neither a puncture nor an incision of the skin, nor the insertion of an instrument into the body.

For most parts of the body, scientists and practitioners of medicine have absolutely no qualms using invasive techniques. Surgery is invasive. So is the use of a pacemaker, and so will the microchip Jeff Bezos slides under our skin while we’re sleeping.

But, for matters of the human brain, we remain very, very cautious about using invasive techniques because of the horrible and irreversible damage that may be done as a consequence. And so, we often stay outside of the human brain.

Electroencephalography (EEG) involves placing electrodes on someone’s head to record electrical activity. There are activity differences between people who are solving problems, concentrating, awake, drowsy or asleep. 

EEG gives us some sense of what is happening and exactly when it is happening, but it is terrible at telling us where it is happening. So other techniques may be employed for better spatial resolution.

Functional magnetic resonance imaging (fMRI) measures changes in blood flow in certain areas of the brain. Paired with an MRI scan, it can give us a 3D view of the brain and its activity.

Computed tomography (CT) scans are created by exposing the brain to X-rays — not much more to know about that one.

In a positron emission tomography (PET) scan, people are given radioactive tracers, usually by injection, that can find and reveal cancers, heart disease and brain disorders. This is one of the few cases where taking in radiation can be life-saving, but, even then, neurologists limit the number of PET scans a person has.

The future of neuroscience may lie in developing safe and effective technologies that can record the brain's activity down to a single neuron. This is far more precise than the general blobs current technologies show over certain areas of the brain.

It will be a while before we can carefully and specifically record neural activity, but, until then, researchers are steadily making improvements to the existing scans to minimize their flaws.

For example, since a PET scan involves consuming radioactive isotopes, a development team at University College London devised a safer and more sustainable alternative.

Instead of consuming radioisotopes, people ingest glucose that has been magnetically labeled with bursts of radio waves so it can be detected by an MRI machine.

Other researchers are trying to use digital models of the brain to accurately simulate how it responds to trauma. 

Scientists at Los Alamos National Laboratory are developing computer modeling software that accounts for a patient's specific brain anatomy. They call it the "digital head."

Information from the simulation of damage could help treat an injured patient and provide information as to how the brain responds to trauma.

Maybe in the future, there will exist a safe and widespread neuroimaging technology that is able to specifically record neural function in real time.

This would provide invaluable insight into how the brain works, what it is doing and when it is doing it. Current technologies all falter at answering at least one of those questions.
