Thursday, 20 September 2012

Militarization of Neuroscience?

US military tech hothouse DARPA have an exciting announcement:
Tag Team Threat-recognition Technology Incorporates Mind, Machine
DARPA links human brainwaves, improved sensors, cognitive algorithms to improve target detection...
In what is - to my knowledge - the first example of the direct militarization of neuroscience, DARPA have developed a system in which electrical responses in a human brain are an integral step.

A soldier watches a screen on which, via various cameras, possible battlefield "threats" are shown. The cameras are fancy, and fancy image-recognition algorithms prioritize images that resemble threats - stuff that looks a bit like a tank, an IED, etc. But that's just the set-up.


The neuroscience core is that rather than just having a guy watching this screen and pressing a button if he spots something, they have a guy wired up with EEG to record brain activity. The system registers a threat when a picture causes a P300 response.

Now, the P300 is an electrical wave triggered by stimuli that are somehow 'meaningful' to the individual person. If you ask someone to press a button whenever they see a red light, for example, and then show them various lights, red ones will elicit a P300.

Very clever. But it may be too clever for its own good.

We already have a system that can detect the P300. It's the brain. No, most of us don't think of it in those terms - we think of it as "Oh!" or "WTF?" or "Button press time" - but that response is the P300 (or rather something that precedes it, because the P300 takes 300 milliseconds to peak, yet you can respond faster than that).

So why the EEG?

You could program a computer to detect P300s in a guy's brain and set off an alarm. DARPA apparently have. But it would be easier and cheaper to just 'program' the guy's brain to detect the P300 and push an alarm button - by asking him to do that. The human brain is a supercomputer that's been in development for hundreds of millions of years and its primary job is to detect threats and act on them as quickly as possible. One day technology might be able to do better but I don't think we're there yet.
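For the curious, "detect the P300 and set off an alarm" boils down to something like this minimal sketch (Python/NumPy; the channel, time windows and threshold here are my illustrative assumptions, not DARPA's actual parameters):

```python
import numpy as np

def p300_alarm(eeg, onsets, fs=250, threshold_uv=5.0):
    """Flag image onsets whose post-stimulus EEG shows a P300-like positivity.

    eeg:    1-D array of samples (microvolts) from a midline channel, e.g. Pz.
    onsets: sample indices at which each image appeared.
    For each onset, baseline-correct against the 200 ms before the stimulus,
    then check whether the mean amplitude 250-500 ms post-stimulus exceeds
    an (arbitrary) threshold.
    """
    pre = int(0.2 * fs)                      # 200 ms baseline window
    lo, hi = int(0.25 * fs), int(0.5 * fs)   # 250-500 ms P300 window
    flagged = []
    for t in onsets:
        baseline = eeg[t - pre:t].mean()
        if (eeg[t + lo:t + hi] - baseline).mean() > threshold_uv:
            flagged.append(t)                # treat as a "threat" image
    return flagged
```

In practice single-trial P300s are buried in noise, so real systems classify multi-channel features (e.g. with linear discriminant analysis) rather than thresholding one electrode, but the logic is the same.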

DARPA say:
In testing of the full CT2WS kit, the sensor and cognitive algorithms returned 810 false alarms per hour. When a human wearing the EEG cap was introduced, the number of false alarms dropped to only five per hour, out of a total of 2,304 target events per hour, and a 91 percent successful target recognition rate.
All that tells us is that having a human check the pics via EEG is better than having no human involved at all. That's fine, but would a human just checking the pics via a button be even better? We're not told. Maybe DARPA ran those tests and it really does offer advantages, but off the top of my head I can't think of any, and it wouldn't be the first time that the allure of high-tech neuroscience has blinded smart people to the fact that there's an easier, less sexy solution.

Unless...

OK. This is going to make me sound like a conspiracy nut. But there's one scenario in which the P300 has a decided advantage: unlike a button press, it's involuntary. It would work even if the guy doesn't want to co-operate.

So suppose you've captured a terrorist and you want to know who his terrorist friends are or where they've put the bomb. But he's not talking and Samuel L Jackson is off sick. So you wire him up to this system and show him a bunch of pictures of all the possible suspects or targets on your database. His brain will respond with a P300 to the ones he recognizes.

That would probably work - sometimes - and the P300 is already being trialled in some legal contexts for just that purpose, although it's not clear how reliable it is.
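For concreteness, the analysis behind that kind of 'guilty knowledge' test usually compares responses to 'probe' items (things only the culprit should recognize) against irrelevant items. A minimal sketch, assuming stimulus-locked epochs are already extracted (the window and the 90% cut-off are illustrative, not a validated protocol):

```python
import numpy as np

def probe_recognized(probe_epochs, irrel_epochs, fs=250, n_boot=2000, seed=0):
    """Crude concealed-information test: is the P300 to the probe reliably larger?

    probe_epochs, irrel_epochs: (n_trials, n_samples) stimulus-locked EEG from
    a parietal channel, time zero at stimulus onset. Compares mean amplitude
    in a 300-600 ms window via a bootstrap over trials.
    """
    rng = np.random.default_rng(seed)
    lo, hi = int(0.3 * fs), int(0.6 * fs)
    probe = probe_epochs[:, lo:hi].mean(axis=1)   # per-trial probe amplitude
    irrel = irrel_epochs[:, lo:hi].mean(axis=1)   # per-trial irrelevant amplitude
    wins = sum(
        rng.choice(probe, size=probe.size).mean()
        > rng.choice(irrel, size=irrel.size).mean()
        for _ in range(n_boot)
    )
    return wins / n_boot > 0.9   # "recognized" if probe wins >90% of resamples
```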

So it's just possible that this whole soldier-scanning-the-battlefield story is merely an elaborate front (and perhaps a useful source of crucial calibration data) for a device to allow the CIA to read minds. I warned you it would make me sound crazy. Quick! Pass the tinfoil hat...!

22 comments:

Unknown said...

I believe they use it to identify the response even before it registers in conscious awareness (e.g., showing the images for < 100 ms). This way it would be faster than asking the human to respond.

areanimator said...

Direct militarization of neuroscience isn't as new as you think. The excellent book "Mind Wars" by Jonathan D. Moreno details a lot of similar applications and has an extensive chapter on the ethics of military applications of neuroscience. Highly recommended.

Neuroskeptic said...

areanimator: Ooh thanks, I will take a look.

Gurumurthy Swaminathan: Mmm. Maybe. But I bet you could train someone to consciously detect threat pics just as quickly. Making subconscious responses conscious (and letting the subconscious handle previously conscious processes) - isn't that the point of training?

The advantage of the EEG could be that it cuts out the time required to move a finger and press a button, I suppose, but I'd want to see evidence that it's better than a user-friendly interface + training scheme.

j said...

That's just a regular, boring-old P3-speller BCI like what you could rig up at home for under 100 bucks. I wonder how much money they spent on it.

Now, once they start using intracranial electrodes, they might see some significant reaction time improvements (on the order of 50-100ms).

There's been some EEG research on what's going on in a soldier's brain while shooting:
http://www.ncbi.nlm.nih.gov/pubmed/17547316
http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA433487

Jan Moren said...

I suspect it's less to do with speed and more with fatigue. It's well known that monitoring jobs such as watching security cameras, sonar or anything like that can only be done in short intervals. We quickly get bored and let our mind wander. Once that happens we start to miss even quite obvious things.

This might be a way around that. The EEG signal might be reliably triggered even when the conscious mind is busy thinking about food, sex, sleep or any combination thereof.

Anonymous said...

Neuroscience, Ethics, and National Security: The State of the Art
by Michael N. Tennison, Jonathan D. Moreno

http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1001289

DARPA, being DARPA, would apply any knowledge to get an edge.

"The Defense Advanced Research Projects Agency (DARPA) was established in 1958 to prevent strategic surprise from negatively impacting U.S. national security and create strategic surprise for U.S. adversaries by maintaining the technological superiority of the U.S. military."
http://www.darpa.mil/our_work/

j said...

Jan Moren: if you're too tired to notice the stimulus, you won't show much of a P300 either. Since its discovery, it's been known as THE "endogenous", subjective component (Sutton et al 1965); for a review of what the P300 can offer, see http://www.ncbi.nlm.nih.gov/pubmed/16060800

However, some research has tried to use the EEG to diagnose periods of inattentiveness, for example http://sccn.ucsd.edu/~scott/pdf/CanExp00.pdf
I can't comment on how reliable that would be.

Neuroskeptic said...

j: Interesting.

Also, one can see scenarios in which the P300 would actually be less useful than a deliberate action. For example, if the operator had got used to seeing lots of threats, or threats in predictable locations, he might come to expect a threat, and then he'd get a P300 to the absence of the expected threat.

j said...

Neuroskeptic, I don't think that would affect the P3 (unless you'd actually be looking at a constant stream of violence) ... You get a P3 to total surprisal (by an event you can't just ignore) as well as to something you've been waiting for (like the target in an Oddball task, where you're 100% sure of WHAT item you have to react to with a button press, but don't know WHEN it will appear).
P3 isn't truly about surprisal or expectation, it's about orienting to something with personal significance.

You should read the Nieuwenhuis et al paper. Or come to my talk tomorrow :)

Vince said...

A comment on the neuroscience and the applicability.

In terms of the neuroscience, it would appear, based on the physical constraints, that a reasonably advanced BCI will outperform a human at recognition tasks - even, oddly enough, when using that human's own sensory systems.

The latency between the lower-level processing of sensory streams and their high-level 'conscious' integration is longer than the latency of the P300 signal we're recording.

But that's irrelevant. Volitional motor activity isn't causally instigated serially, following the 'conscious' pipeline (which includes the P300); rather, it appears to start earlier, via the unconscious Bereitschaftspotential. Recall Benjamin Libet's work. So, in this case, the information we desire is available for the BCI's use around 300ms prior to a human's conscious perception of it.


In terms of utility, DARPA's basic research could surely be evolved for the black-world reasons you state. It could also be used for more open applications; I can see a medium-term future in which a pilot's off-axis helmet-mounted displays are augmented by such technology, or other such apps.

j said...

Vincent,
"Volitional motor activity isn't causally instigated serially following the 'conscious' pipeline (which includes P300), rather appears to start previously via the unconscious Bereitschaftspotential. Recall Benjamin Libet's work. So, in this case, the information we desire is available for the BCI's use around 300ms prior to a human's conscious perception of it."
A P3 can appear 200ms after a stimulus (see e.g. Makeig et al., 1999), though I'm not sure how fast current BCI systems are at recognizing it.

The Bereitschaftspotential simply reflects your intention to move somewhere within the next few seconds. It's a simple continuous negative shift that doesn't by itself tell us anything about the timing of the actual event we're interested in. Libet was measuring internally generated effects (the subject's own movement intentions); you cannot use that method to measure external effects. Furthermore, Libet did not measure accurate timing by itself; rather, he measured when one could tell what kind of movement (e.g. left vs. right, which affects the lateralization of the CNV/readiness potential) would follow.

Though indeed, the P3 probably doesn't reflect stimulus evaluation, but the reaction to stimulus evaluation, which is why it's preceded by the N2 (which has also been used in a BCI system) and sometimes by even earlier effects. The nice thing about the P3 is that it's easy to elicit and measure, because it's such a large effect.
If you went intracranial and were to directly wire up the brain stem, you could, I think, cut reaction time approximately in half (while greatly improving SNR), while still observing just the same class of stimulus.

j said...

Though as Neuroskeptic implied, I have no idea how this is an improvement over having them press a button at all, wherever you put the wires. P300 effects co-occur with, and are nicely temporally coupled to, motor responses anyway.

Anonymous said...

I'm passingly familiar with the research, though it looks like the specific work described in the article is related to, but not exactly, the work I'm familiar with. They are trying to use Rapid Serial Visual Presentation (RSVP) of huge databases of imagery and use the P300 response to select a subset of the rapidly presented images for further analysis - by knowing the average offset between stimulus presentation and the observation of the P300 response, they can select the images that most likely caused the response and focus analysis on those images. In the proposed system, the soldier doesn't hit a button at all but just watches the RSVP for targets. This way the soldier doesn't have to hit a button, doesn't have to go back through the stream to find the target that triggered the response, etc. I've seen data reporting success at sorting targets from large non-military datasets that had highly salient targets.
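Roughly, the offset-mapping step might look like this (a sketch; the 300 ms lag and the selection window are assumptions on my part, not the project's actual parameters):

```python
import numpy as np

def images_for_p300s(image_onsets, p300_peaks, lag=0.3, tol=0.15):
    """Map each detected P300 peak back to the RSVP images likely to have caused it.

    image_onsets: presentation times (s) of the images, in the order shown.
    p300_peaks:   detected P300 peak times (s).
    lag:          assumed average stimulus-to-peak latency (s).
    tol:          keep every image whose onset falls within +/- tol of the
                  back-projected time, i.e. a small subset, not one frame.
    """
    image_onsets = np.asarray(image_onsets)
    subsets = []
    for peak in p300_peaks:
        target = peak - lag   # estimated onset time of the triggering image
        subsets.append(np.flatnonzero(np.abs(image_onsets - target) <= tol))
    return subsets
```

At a typical 10 Hz RSVP rate a ±150 ms window spans about three frames, which is presumably why a subset of images, rather than a single frame, gets flagged for further analysis.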

Now, a concern that came up in discussion was that P300 isn't discriminatory. If you stick a picture of a soldier's spouse in the stream, you might get a P300 response. If the soldier is hungry and a McDonald's sign shows up, you might get a P300. The false alarm rate isn't being studied closely right now as they focus on just getting signals but I suspect it will be an interesting topic down the road.

Jake said...

From the linked article:

"The use of EEG-based human filtering significantly reduces the amount of false alarms. [...] the sensor and cognitive algorithms returned 810 false alarms per hour. When a human wearing the EEG cap was introduced, the number of false alarms dropped to only five per hour, out of a total of 2,304 target events per hour, and a 91 percent successful target recognition rate."

This is all fine, but it raises what I think is the obvious question: has stimulus discriminability actually been improved, or has the response criterion simply been changed? In other words, I wanted to see a signal detection analysis. If it turned out that the decreased false alarm rate was simply due to the use of a more conservative criterion, then it's really not clear what the big advantage is, as there are certainly behavioral methods for inducing people to change their criterion. The difference in hit rates is not discussed at all as far as I can see, so we can't really tell as is.
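To make that concrete: from a hit rate and a false alarm rate you can separate sensitivity (d') from response criterion (c). A minimal sketch; the false alarm rates below are hypothetical, since the article reports false alarms per hour but not the number of non-target events:

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Return sensitivity (d') and criterion (c) from hit/false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f              # discriminability
    criterion = -0.5 * (z_h + z_f)   # response bias (positive = conservative)
    return d_prime, criterion

# Hypothetical numbers: the same 91% hit rate looks very different
# depending on the (unreported) false alarm rate per non-target event.
print(sdt(0.91, 0.20))    # d' ~ 2.2, lax criterion
print(sdt(0.91, 0.002))   # d' ~ 4.2, conservative criterion
```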

Dirk Steele said...

Does anyone, apart from the military, want soldiers to immediately and subconsciously 'jump to conclusions' before pulling the trigger, or pressing the RED button? Not me.. :-(

The Neurocritic said...

DARPA has funded work on "the militarization of neuroscience" for quite a while now. There was much discussion about it after Moreno's book, like The militarization of neuroscience and The Pentagon and Neuroscience (by Jonah Lehrer). There's also the 2011 report from the Royal Society on Neuroscience, conflict and security.

Neuroskeptic said...

Well, I stand corrected on the "first example of the militarization of neuroscience"; it is not. Thanks to everyone who pointed that out.

Mike said...

Neuroskeptic: "OK. This is going to make me sound like a conspiracy nut. But there's one scenario in which the P300 has a decided advantage: unlike a button press, it's involuntary. It would work even if the guy doesn't want to co-operate."

I don't think your 'conspiracy theory' is particularly unreasonable and it could be the answer, but I would have suspected that there was also a significant advantage in the fact that it cuts out the link between the identification of a target and the reporting of that identification.

As we know from research looking at verbal reports, there are different contingencies controlling and affecting the identification of something and the reporting of that same thing. In the same way that there is a distinction between performing an action and explaining how you're doing it - when we ask golfers to explain the steps they take to hit a ball, we see their performance decrease.

In other words, although I don't know of any research off-hand to support this, it could certainly be the case that removing the extra step of having a person report the event results in an increase in accuracy. For example, when reporting the stimulus, observers have to deal with factors like self-doubt: they might see a target but rationalise it away, whereas the EEG measures wouldn't be affected by subsequent rationalisations.

Of course, there would also be false positives with the EEG measurements, as one of the commenters mentioned with regard to seeing something like a spouse on the battlefield. This would elicit a response, but not one that is relevant to identifying threats.

Neuroskeptic said...

MikeSamsa: Mmm, interesting idea that the P300 might offer a window into the 'raw' perception of threat, unmediated by conscious processes. However, I do wonder whether the proper training could achieve the same thing. It seems to me that learning to properly interpret (and verbalize) one's own unconscious responses is one of the main goals of training in cases like this (learning 'when to trust your nose...')

Dirk Steele said...

Sorry for the derail. Thomas Szasz has died. There is no one who can take his place. He deserves a blog comment.

Jonathan Swift; "When a true genius appears in this world, you may know him by this sign, that the dunces are all in confederacy against him."

Mike said...

Neuroskeptic: "However I do wonder whether the proper training could achieve the same thing. It seems to me that learning to properly interpret (and verbalize) ones own unconscious responses is one of the main goals of training in cases like this (learning 'when to trust your nose...')"

Yeah, it would definitely be possible to significantly improve accuracy and get it to near-perfect using the correct techniques. The problem, in terms of military application, would probably be two-fold: 1) the correct techniques would be very time-consuming, requiring constant re-training and could get quite costly, and 2) they are probably unlikely to use correct techniques (as things like "learning" often aren't viewed as scientific fields that have valid contributions to these things). I suppose the cost issues in #1 there would be relevant to the technology and tools they're using in the EEG method as well though.

In terms of the task, which is just a signal detection task, the problem would be that false positives are always going to occur (as confusion over discriminable stimuli is an inherent part of our learning mechanisms), and I just can't see a way to overcome the compounding effect that verbal reporting would bring to the issue. One of the main problems would be that of 'behavioral drift', where the understanding of the criteria that govern their observations will be subtly altered by their ongoing experiences.

That's mostly just speculation though. I can see some reasons why their approach may be preferable, but I'm sure there will be drawbacks and problems with their methods that I'm just not aware of, as brain processes aren't my field.

Graham Healy said...

The P300 is not binary: it has an amplitude that reflects, in say the case of an oddball task, the local and global probability of a target stimulus. The P300 has also been shown to comprise a number of sub-components, like the P3a and P3b, that are known to be sensitive to stimulus type and expectation. I think when annotating a large image data-set, information like this can provide not only a confidence measure for each image but may also allow further image categories to be discerned.

Coupling this with measures of attentiveness and with button presses (where effects like error-related negativities can indicate erroneous responses), and considering that these systems typically rely on examining time regions where a range of ERPs are known to occur, they may well offer a system better equipped to let a user filter images without the classical constraint of manually clicking a button at their own pace.
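As a toy illustration of the amplitude-as-confidence idea (the window, channel and baseline choices are assumptions):

```python
import numpy as np

def rank_by_p3_amplitude(epochs, fs=250):
    """Use single-trial P3 amplitude as a graded confidence score per image.

    epochs: (n_images, n_samples) stimulus-locked EEG from a parietal channel,
    spanning -200 ms to +800 ms around each image onset.
    Returns image indices ordered from most to least 'target-like',
    plus the raw scores, rather than a binary detect/no-detect.
    """
    pre = int(0.2 * fs)                                 # samples before onset
    lo, hi = pre + int(0.3 * fs), pre + int(0.5 * fs)   # 300-500 ms window
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)
    scores = (epochs[:, lo:hi] - baseline).mean(axis=1)
    return np.argsort(scores)[::-1], scores
```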


Also, I don't think you sound crazy, Neuroskeptic; it's probably just a matter of time before tech like this matures to the point that the nooks and crannies of how it works can be realised across a wider and perhaps more sinister range of applications.

Two publications showing the advantage of including button presses in scenarios like these:
http://liinc.bme.columbia.edu/publications/triage_ieee.pdf
http://doras.dcu.ie/16387/1/graham-paper-HCI.pdf