I ran across an interesting military report from the 1980s that reviews research into biofeedback, evaluating its effectiveness as a performance-enhancement tool.
What caught my attention were the specific outcomes the military was interested in: (1) biofeedback as general anti-stress training, and (2) biofeedback matched to a specific performance or behavior.
The first refers to the use of biofeedback training to increase the soldier's ability to consciously reduce symptoms of stress in the field, especially those symptoms that would impact performance. The second refers to specific activities such as marksmanship.
The feedback methods used included heart rate, various frequencies of EEG, and EMG, among a few others.
However, the report is skeptical of the benefits of biofeedback training. For anti-stress training in particular, it found that any results from the training were tough to replicate in actual stress conditions:

"...Further, positive laboratory results tended to disappear when tested under conditions more similar to operational settings and when more operationally relevant tasks were employed."

The '70s and '80s enjoyed a great deal of biofeedback hype, so the skeptical tone suggests a report written by a pragmatic skeptic.
A maker of continuous glucose monitors recently filed a patent for a continuous monitor that syncs with a smartphone.
It has those pretty graphs one would expect, but the most interesting feature is the notifications. The left graphic gives clear examples of how interpretation of continuous biomarker data can be used to motivate specific actions by the user.
The patent also discusses how the data could be used with other smart phone capabilities, like activity tracking, GPS, and food logging.
It also suggests the monitor could notify your doctor, with or without the patient's prior confirmation, when blood sugar reaches a dangerous level -- useful, but eyebrow-raising.
There is a promising trend of glucose monitors becoming more smartphone-friendly. Several services connect smartphones to glucose-monitoring devices, while some monitors sync with laptops and smart devices automatically. Examples include the ditto System.
These target diabetes populations, of course. This writer is interested in molecular biomarkers for health tracking in the general healthy population. Challenging regulatory and business issues aside, the technology is moving in that direction.
Listen up, maggots. You are not special. You are not a beautiful or unique snowflake. You're the same decaying organic matter as everything else.
That is how Tyler Durden might have responded to proclamations about how quantified self-style self-experimentation and bio-hacking is about "N=me".
The "N=me" idea is about self-empowerment. It says "I want what works for me, not what works for most". It is a refusal to surrender one's wellness sovereignty to "experts". It is a bold statement that "I will not be regressed to the mean".
Hear, hear! I definitely do not agree with Tyler. People should indeed be empowered to take charge of their health and wellness.
BUT... N = me is small potatoes and we can do better.
Where "N = Me" comes from
N refers to sample size, as in scientific experiments, where statistical calculations determine the sample size needed to have enough statistical power to reject a null hypothesis -- say, for example, that some drug has no effect on some disease.
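To make the sample-size idea concrete, here is a minimal sketch of the textbook two-sample power calculation. The z-value constants and the 0.5 "medium" effect size are illustrative assumptions, not numbers from any particular trial:

```python
import math

# Approximate sample size per group for a two-sample comparison:
# n = 2 * ((z_alpha + z_beta) / d)^2, where d is the standardized
# effect size (difference in means divided by standard deviation).
z_alpha = 1.96    # two-sided test at significance level 0.05
z_beta = 0.8416   # 80% statistical power
d = 0.5           # hypothetical "medium" drug effect

n_per_group = math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
print(n_per_group)  # 63 subjects per group
```

The point: detecting even a moderate drug effect with conventional rigor takes dozens of subjects per arm, which is exactly why a sample of one cannot meet this standard.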
"Drug" is the key word here -- "N=me" is really reacting to the paradigm of randomized controlled trials (where sample size really matters), which is the standard for pharmaceutical clinical trials. Pharmaceuticals dominate our entire thinking about health and medicine.
Put simply, this very high standard of scientific rigor is what health professionals expect from all things medical and health-related. Since your personally recorded health data can in no way reach this standard of rigor, they are inclined to dismiss it, express outright hostility, or even restrict it for fear that you might do something crazy like lopping off a boob.
So the "N = Me" mantra is our reaction to the hostility of the health establishment to technologies that empower people to manage their own health.
Why it is detrimental thinking
The idea that our health and wellness needs are completely unique is a huge opportunity for bullshittery.
Here is a litmus test for QS and biohacking snake-oil salesmen: "everybody is unique, so their experience/numbers are different". I have heard this a lot, especially from the biohacker crowd. The fact is, biologically we are more similar than we are different -- even the apes have more genetic diversity than we do. So the rule of thumb is: if they can't find a common signal across all people, it is nothing but an overpriced mood ring.
Why we can do better
Randomized controlled trials are great when they can be done -- but it is ridiculous to act as if there were no other way to be rigorous and objective about data. As any economist on Wall Street or data scientist in Silicon Valley will tell you, data does not have to come from a designed, FDA-approved scientific experiment to be useful or treated with mathematical rigor.
The main advantage we gain from combining our data is recommendations. For example, PatientsLikeMe uses a collaborative filtering algorithm to suggest patients with experiences that may be relevant to your health situation (1). Or consider the dietary or exercise program you are tracking: would it not be useful to get recommendations for improving the regimen based on the experience of like-minded people?
- Swan, Melanie. "Emerging patient-driven health care models: an examination of health social networks, consumer personalized medicine and quantified self-tracking." International Journal of Environmental Research and Public Health 6.2 (2009): 492-525.
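As a toy illustration of the kind of user-based collaborative filtering such a service might run, here is a minimal sketch. The users, interventions, and ratings below are entirely made up; real systems use far larger matrices and more sophisticated models:

```python
import math

# Hypothetical ratings: how well each intervention worked for each user.
ratings = {
    "alice": {"keto": 5, "hiit": 3, "meditation": 4},
    "bob":   {"keto": 4, "hiit": 2, "yoga": 5},
    "carol": {"hiit": 5, "yoga": 4, "meditation": 2},
}

def cosine(u, v):
    # Cosine similarity over the items both users have rated.
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[k] * v[k] for k in shared)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm

def recommend(user):
    # Score items the user hasn't tried, weighted by neighbor similarity.
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, rating in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['yoga']
```

Here "alice" gets yoga suggested because the users most similar to her rated it highly -- the same mechanic, scaled up, behind "patients like you found X relevant".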
Mashable comments in Fitness Trackers Are Useless Without Real-Time, Personalized Analysis
The services behind these trackers need to invest in immediacy by providing useful information, ideally in real time, so we can optimize our wealth of data into action.
This assertion falls short -- it is not useful to track something in real time if we cannot change it instantly through some intervention.
Heart-rate data, for example, is actionable in real time -- say, when you are doing HRV biofeedback training or targeting a certain heart rate during strenuous exercise. Obviously, other streams of data that map directly to an ongoing activity one wishes to optimize, like a workout, also fit this mold.
Weight is a simple example of something quantified-selfers track that doesn't have "real-time" immediacy -- it is not useful to check your weight second by second because no practical intervention can change it in an instant. It is one of many things we are interested in tracking but do not need to track in real time.
A dispatch from the Digital-Life-Design conference in Munich, courtesy of Arik Hesseldahl at Re/code.net, who asks "Have wearables gone wild?":

...That was my question for Hosain Rahman, the CEO of Jawbone, maker of the Up activity monitoring bracelets. In a speech at the DLD conference today, he said the company has gathered a huge mass of data from millions of Up users and are beginning to see some patterns that are, on its face, interesting. But useful? Who knows?
Here’s one nugget: Jawbone has collected data on more than 160 millennia worth of sleeping patterns of its users. (By my math, that’s about 1.4 billion hours; Jawbone doesn’t disclose how many users it has.) That’s enough data to show very clearly, he said, that women on average tend to sleep 20 minutes longer per day than men. He called it “the world’s largest sleep study.”
Great! And if you’re not a sleep specialist, so what?
The "so what?" Arik is talking about really means: "What's the point of having all that data if it doesn't make for a better product?"
A quantified self device by definition is providing data about the user to the user. The question brought about by big data is not "what can we learn when we pool data across users?", rather it is, "how do we pool data across users in a way that matters?"
There is a difference between seeing interesting patterns in data across users, and harnessing data across users to help individual users make decisions. Netflix doesn't use its data to make statements like "Women watch TV series on Tuesday more often than men, whadaya know?" Rather, it uses the data to make recommendations to its users on what to watch. It makes the product more valuable and, in turn, makes them more money.
Most quantified self devices fall short on this, with the exception of occasionally comparing a user to the "average".
Have Wearables Gone Wild? Questioning the Quantified Life
I dream of an SDK for my wetware -- the Nirvana of quantified self and biohacking, wherein I can build apps based on timely data about the internal workings of my body.
Such apps won't be easy to make -- the challenge will be in signal processing, as well as finding features in the data that actually relate to outcomes people care about, like weight loss or stress management. These will be great problems that will require great effort and great art, and I'd love to get started.
But before I or anyone else can do that, there are some problems that have to be solved first. The first is the need for well-designed hardware that makes something like glucose tracking not only completely painless and mess-free, but completely passive. The second is the need to resolve the large mismatch between the cost of bringing new biomedical devices to market and the relatively low pricing typical of mass-market consumer devices.
So I am excited about recent actions from Google and Apple. Namely, Google has announced a contact lens that tracks glucose, and Apple is building its iWatch team, having made hires that suggest it will contain sensors that track your body's internal chemistry.
The aforementioned problems are the kinds of things those Bay Area titans are very well equipped to solve. So I am very excited about where these developments could lead.
Quantified-self-style activity trackers are a rare example of consumers paying for prediction.
Consider the consumer products available that are based on prediction. It is hard to find examples where consumers actually PAY for a prediction. Let’s knock out a few candidates:
Getting paid, getting made, and getting laid
- Recommendation engines on e-commerce sites — the recommendation persuades shoppers to buy more things, but they pay for the things they are interested in buying, not the recommendation.
- Ad servers (e.g. Google, Facebook) — you ARE the product
- Social network new friend suggesters — who the heck pays for that?
- Services that rely on computationally optimized logistics (e.g. car services, UPS, United Airlines) — people don’t know and don’t care what’s under the hood.
- Retail services that rely on predictive analytics — if people did know what’s under the hood, it would probably piss them off.
Bill Bishop, author of the wonderful Sinocism China Newsletter, once gave me a piece of advice about selling consumer products: “It has to help them get paid, get laid, or get made.”
In other words, to convince people to buy a consumer product or service, you have to prove to them it will help them make money, become more attractive, or achieve some social objective. It is not a fixed law*, but it is a damn fine rule of thumb, especially for startups. So when do people buy predictions to serve these ends? Examples:
- Getting paid: People will pay for predictions on what stocks to buy. (eg. a paid newsletter)
- Getting made: People will pay for predictions that will help them outperform their friends in fantasy sports.
- Getting laid? — activity trackers
Activity tracking apps use machine learning to predict calories consumed and burnt, steps walked, quality sleep slept, etc. They guess from our motion what we are doing, such as which exercise we are engaged in or whether we are keeping good posture.
They certainly fall in the “get laid” category — in that (let’s face it) our fitness and wellness objectives are often at least partly about wanting to look good.
So until Google Glass comes out with augmented reality that predicts who will go home with you in a singles bar, Quantified Self is the rare “get laid” example of buying a prediction.

*Main exceptions are non-discretionary spending, spending on offspring, checkout-aisle-style impulse spending, and spending on entertainment.
The principal objection to 23andme boils down to the idea that people cannot be trusted to interpret their own personal genome test results. Naysayers imagine scores of women with a mutation on BRCA1 lining up to have their breasts lopped off.
The medical establishment believes that medical practitioners are the only ones who can be trusted with interpretation. Indeed, the very difficulty of getting medical professionals to turn over one's own medical records indicates how much informational authority a doctor holds over a patient.
But increasingly it seems that savvy patients, particularly those with a scientific or quantitative bent, are challenging that authority. I know of several quant friends who have stumped their doctors by tracking and interpreting their own data. Doctors often reject the analysis outright, like creationists rejecting evidence for evolution.
In a beginning statistics class, one might learn Bayes' rule through a clinical diagnostics example. The linked example explains it well, but basically it shows that clinical tests have error rates, and that a test's imperfect sensitivity and specificity affect the probability that the subject actually has the disease. Interpreting this uncertainty is not unlike interpreting the uncertainty inherent in a 23andme report.
I have never met a doctor who can explain the simple Bayes example to me. This is not a big deal; I know many published scientists who can't explain a p-value. But I don't automatically defer to those scientists' judgment in interpreting p-values. Why should I do so with a doctor?
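For the curious, the Bayes calculation is a few lines of arithmetic. The numbers below (1% prevalence, 95% sensitivity, 90% specificity) are hypothetical, chosen only to make the point:

```python
# Bayes' rule applied to a hypothetical clinical screening test.
prevalence = 0.01   # P(disease) in the tested population
sensitivity = 0.95  # P(positive | disease)
specificity = 0.90  # P(negative | no disease)

# Total probability of a positive result, by the law of total probability:
# P(pos) = P(pos|disease)P(disease) + P(pos|healthy)P(healthy)
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Bayes' rule: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # 0.088
```

Despite a "95% accurate" test coming back positive, the subject has under a 9% chance of actually having the disease, because the disease is rare. This is exactly the kind of counterintuitive arithmetic that matters when interpreting any screening report.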
I am in the process of researching the uses of EEG in the quantified self and biohacking communities, specifically in the domain of biofeedback training. When I initially looked at research on EEG biofeedback, I realized I faced the Augean task of separating solid findings from pseudoscience and outright mysticism.
So I changed tack, and started looking at machine learning approaches to classifying EEG signals. The goal of classification algorithms is to make a guess about what the brain is doing.
The scope of the classification depends on the application. Further, machine learning papers tend not to get published unless they can show decent performance. Therefore, I figured this research would give a good idea of what kinds of things we can actually do with EEG data.
I found much of the work is done on classifying motor imagery, i.e. where the subject imagines movements. The application of this is brain-computer interfaces (BCI) for gaming or for prosthetic devices for the disabled. Other common classification tasks include classifying sleep staging, emotions, and whether or not the subject is having a seizure.
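To give a flavor of what such classifiers do, here is a deliberately toy sketch: a synthetic single-channel "EEG" built from sine waves, with a one-bin DFT band-power feature separating a 10 Hz (alpha-like) signal from a 20 Hz (beta-like) one. Real pipelines use multi-channel recordings, proper filtering, and learned classifiers; every number here is an illustrative assumption:

```python
import math
import random

random.seed(0)
FS = 128  # sampling rate in Hz
N = FS    # one second of samples

def synth(freq):
    # One second of a sine wave at `freq` Hz plus Gaussian noise,
    # standing in for a band-dominated EEG segment.
    return [math.sin(2 * math.pi * freq * t / FS) + random.gauss(0, 0.3)
            for t in range(N)]

def band_power(x, freq):
    # Signal power at a single DFT bin -- a crude band-power feature.
    re = sum(v * math.cos(2 * math.pi * freq * t / FS)
             for t, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * t / FS)
             for t, v in enumerate(x))
    return (re * re + im * im) / N

def classify(x):
    # Label the segment by whichever band carries more power.
    return "alpha" if band_power(x, 10) > band_power(x, 20) else "beta"

print(classify(synth(10)))  # alpha
print(classify(synth(20)))  # beta
```

The band-power-plus-decision-rule structure is the skeleton of many published EEG classifiers; the published work replaces the threshold with trained models such as LDA or SVMs and works on far messier real signals.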
Meditation and ADHD
If you are interested in fitness and wellness, you are likely interested in using neurofeedback for meditation. Perhaps ADHD as well, since from a simplistic view ADHD seems like the opposite of the mental traits one wishes to train with mindfulness meditation. There has been some work on classifying people with ADHD versus those without, with hopes of developing diagnostic tests.