Pass up the hard drive, it’s time for a DNA drive

We live in an age of information overload. A recent estimate of the amount of digital data in the world pegged this number at a whopping 1.8 zettabytes- that’s 1.8 x 10^21 bytes, or 1800 billion gigabytes of data. Indisputably, a lot of this information is extremely important, such as hospital files on patients’ clinical histories or data on poverty levels across the globe, and needs to be accessed frequently. However, I suspect that a further part of this data is a little less important and needs to be accessed much less often- case in point (although the number of views of this video suggests that my conception of the importance of this data is quite misconstrued). Currently, the most popular and cost-effective means of long-term storage for all this data is to transfer it onto magnetic tape and store the tape in repositories. However, because of the nature of magnetic tape and frequent changes in media formats, this data needs to be rewritten from time to time to prevent its loss- a rather expensive exercise. An experiment performed by a handful of scientists from the European Bioinformatics Institute in England, led by Nick Goldman and Ewan Birney, and published recently in Nature suggests that to solve this rather costly data archiving problem, we may just have to look within ourselves. Where plastic fails, DNA may succeed!

The authors of the study played with the idea of using DNA as a medium for long-term data storage. The idea itself has been around for a while, first having been proposed in 1995. Indeed, Harvard synthetic biologist George Church and his colleagues even tried it out- the results of their experiment were published in Science last year. Although novel, Church’s team’s approach was error-prone. Using a more sophisticated approach, Goldman and team encoded ~750 kilobytes of digital data in 18 megabases of DNA. For their proof-of-principle experiment, the scientists chose 4 different types of digital files to demonstrate the utility of their approach- an ASCII text file with all of Shakespeare’s 154 sonnets, a fitting PDF of Watson and Crick’s classic paper on the double-helix structure of DNA, an MP3 audio file with an excerpt of Martin Luther King’s “I have a dream” speech, and a JPEG picture of the institute where the work was done. Using a code to convert binary data into the As, Gs, Cs, and Ts of DNA, they encoded these 4 computer files as long DNA sequences. The novelty of their approach is that they managed to encode the data while avoiding runs of the same nucleotide (AAAA, CCCC), which is what made Church’s approach error-prone. They then synthesized this DNA in short overlapping fragments in the lab, flew it halfway around the world, and sequenced it to try and recover the original data. And they succeeded, with 100% accuracy. In the process, they needed only 10% of the synthetic DNA sample to decode the data within, leaving plenty more to repeat the experiment several times.
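
For the curious, here is what the no-repeats trick looks like in practice. This is a minimal sketch of my own, not the authors’ actual pipeline- their scheme uses a base-3 Huffman code and adds indexing and error-checking information- but it captures the core idea: rewrite the data in base 3, then encode each base-3 digit as one of the three nucleotides that differ from the previous one, so a run like AAAA can never occur.

```python
# A toy version of the no-homopolymer encoding (illustrative only; the
# published scheme uses a base-3 Huffman code plus indexing and
# error-checking information).

# For each previous nucleotide, the three allowed successors,
# indexed by the base-3 digit (trit) being encoded.
NEXT = {"A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG"}

def bytes_to_trits(data: bytes) -> list:
    """Rewrite the data as a stream of base-3 digits (a stand-in
    for the Huffman code used in the actual study)."""
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return trits[::-1] or [0]

def trits_to_dna(trits, prev="A"):
    """Encode each trit as a nucleotide that differs from its
    predecessor, so the output never contains a repeated base."""
    out = []
    for t in trits:
        prev = NEXT[prev][t]
        out.append(prev)
    return "".join(out)

dna = trits_to_dna(bytes_to_trits(b"To be, or not to be"))
print(dna)
assert all(a != b for a, b in zip(dna, dna[1:]))  # no adjacent repeats
```

Decoding simply inverts the lookup: each nucleotide, together with its predecessor, identifies one base-3 digit.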

The authors then went a step further, estimating the costs of their DNA-based storage approach for larger amounts of data and comparing them with the current costs of magnetic tape. At their current estimate of ~$12,620 in writing and reading costs per megabyte of data, magnetic tape is clearly more cost-effective, with DNA-based storage breaking even only after 600-5000 years of repeated tape rewriting. However, with the rapid advancement of DNA synthesis and sequencing technologies, and the associated drop in costs, the authors predict that within a decade DNA-based long-term storage may become a viable option for data meant to be kept for at least 50 years.

If you think about it, DNA makes a pretty attractive medium for data storage- that is, after all, its innate function, and it has been performing it for well over a billion years. Considering that 6 gigabases, or 6.6 picograms (a picogram is 1 millionth of a microgram), of DNA is all it theoretically takes to encode one human being, it’s a pretty dense storage medium. It’s also no-fuss- it remains stable even in extreme climatic conditions- just take the case of the DNA sequencing projects of the woolly mammoth and the Neanderthal. So in another few decades, if DNA synthesis and sequencing costs plummet as expected, libraries may be hiring geneticists for their archiving needs!
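
To put a number on “pretty dense”: taking the figures above at face value (6 gigabases weighing 6.6 picograms, and the theoretical maximum of 2 bits per base, ignoring the redundancy and indexing a real scheme needs), a quick back-of-envelope calculation says all 1.8 zettabytes of the world’s data would fit in under 10 grams of DNA.

```python
# Back-of-envelope density implied by the numbers above (theoretical
# maximum of 2 bits per base; real schemes need redundancy and indexing).
bases_per_gram = 6e9 / 6.6e-12            # 6 gigabases weigh ~6.6 picograms
bytes_per_gram = bases_per_gram * 2 / 8   # ~2.3e20 bytes (~230 exabytes) per gram
world_data = 1.8e21                       # the 1.8 zettabyte estimate above
print(world_data / bytes_per_gram, "grams")  # ~8 grams of DNA, in theory
```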

Filed under Genetics, Uncategorized

Humour + Science = The winning combination

It’s Nobel Prize season again! The announcement of the 2012 winners of the most coveted of scientific prizes began Monday with the disclosure of the winners of the prize in physiology or medicine, awarded jointly to John Gurdon from the UK and Shinya Yamanaka from Japan for their revolutionary work on “cellular reprogramming”. Three weeks earlier, a similar announcement transpired, albeit for an arguably less coveted scientific honour- the Ig Nobel Prize! Whereas the Nobel Prizes need no introduction, the Ig Nobel Prizes perhaps warrant some explanation. Organized by the “Annals of Improbable Research”, the Ig Nobel Prize, in sharp contrast to its more distinguished cousin, celebrates seemingly absurd scientific research. Rather than highlight revolutionary discoveries that improve human life, the Ig Nobel Prize chooses to recognize research that makes you laugh before it challenges you to think. Whoever said scientists don’t have a sense of humour!

To give you an idea of the kind of research that impresses the Ig Nobel committee, let’s take a look at the winner of the 2012 prize in Neuroscience. The lucky (now that’s debatable!) laureates were Craig Bennett, Abigail Baird, Michael Miller, and George Wolford, for demonstrating that brain researchers, with the aid of fancy machines and not-so-fancy statistics, can find brain signals anywhere, even in a dead salmon! Published in the “Journal of Serendipitous and Unexpected Results” (not exactly words one would associate with the rigorous scientific process), the paper warns of the dangers of using unsuitable statistical methods to analyse data from functional magnetic resonance imaging (fMRI), a method widely used by neuroscientists today. The authors argue that to deal with the plethora of data that one obtains from fMRI, it is essential to use statistical tests that reduce the probability of finding a false signal- what in technical jargon is called “correcting for multiple testing”. Failing to do so, as a large number of researchers routinely do, could, they warn, have grave implications for the interpretability of the data. To prove their point, the authors placed a dead Atlantic salmon in a magnetic resonance scanner. They presented their subject with a challenging mental task- to determine the emotional state of human individuals (happy, sad, etc.) by looking at photographs of them in different social situations. The researchers took multiple brain scans of the salmon while it applied itself to the task at hand, and then analysed the images for regions of brain activity related to the task, using three different statistical approaches. The simplest and most widely used statistic, which doesn’t correct for multiple testing, led the researchers to find two clusters of activity in the salmon’s brain! As expected, as soon as they applied two other statistical methods that limit the chances of detecting a false positive signal, the clusters vanished. So, although taking brain scans of a dead fish sounds at first like a rather absurd experiment carried out by jaded graduate students stuck in their lab at midnight desperate to amuse themselves, it does serve the noble purpose of educating scientists on proper statistical analyses. And voilà, Ig Nobel worthy research!
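
The statistical point is easy to reproduce yourself. Here is a small simulation (mine, not the authors’, and the parameters are arbitrary) of what happens when you run one significance test per voxel on pure noise: without correction, thousands of “active” voxels appear out of nowhere; with a simple Bonferroni correction, they vanish- just like the salmon’s brain activity did.

```python
# Testing 100,000 "voxels" of pure noise, with and without
# correction for multiple testing (parameters are arbitrary).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_scans = 100_000, 30
noise = rng.normal(size=(n_voxels, n_scans))  # no voxel has any real signal

# One-sample t-test per voxel against a true mean of zero.
_, p = stats.ttest_1samp(noise, 0.0, axis=1)

print("'active' voxels, uncorrected:", (p < 0.05).sum())            # ~5,000
print("'active' voxels, Bonferroni:", (p < 0.05 / n_voxels).sum())  # ~0
```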

If you are, however, still unconvinced that humorous science is worthy of recognition, and prefer to be informed about more serious scientific matters, here is an article that I co-wrote for The New England Journal of Medicine while in graduate school, describing the seminal work of Shinya Yamanaka, one of the laureates of this year’s more serious scientific award. In the five years since Dr. Yamanaka’s groundbreaking research was published, the field of stem cell biology and regenerative medicine has been reinvigorated to pursue the now seemingly more realizable goal of autologous stem cell therapy. And that is truly Nobel worthy.

Filed under Biology, Neuroscience

Reason versus Religion

How religious are you? Do you devoutly believe in a God (or, for that matter, in multiple Gods- I’ve got your back, Hindus!)? Do you vehemently deny the existence of God? Or are you somewhere in between? People’s religious beliefs often change over time based on social context and life events, among other things. But is there something more fundamental, something in a person’s cognitive process, that makes her more disposed to one end of the religious spectrum than the other?

It is this rather challenging question that psychologists Will Gervais and Ara Norenzayan of the University of British Columbia have begun to answer in their recent article published in Science Magazine. And their findings, even if not surprising, are most certainly bound to attract controversy. Primarily by testing a group of Canadian undergraduates, the authors of the study found that a more analytical mind was less inclined to believe in religion or in supernatural agents such as God, angels, and the devil.

According to the established dual-process theory of human thinking, there exist two information processing systems used in mental reasoning. One system is more intuitive and is thought to foster religious belief. The other is more analytical and logical. When undergraduate students were faced with a thinking task that pitted the two systems against each other, those that preferentially used their analytical system (and hence completed the task successfully) were found to be less religious and to have less belief in supernatural agents.

So can challenging a person to indulge in analytical thinking cause them to be less religious? The authors used several subtle tests to activate analytical thinking in their study participants: they exposed them to images of Rodin’s famous sculpture “The Thinker” (presumably to elicit thinking), had them respond to a survey about their religious beliefs in a font that was difficult to read and had been shown to engage the analytical process, and had them play a word game filled with suggestive words such as “analyze”, “think”, and “reason”. The participants exposed to each of these cues showed greater disbelief in God, angels, and the devil than participants who were exposed to placebo cues such as images of the Discobolus of Myron, an easy-to-read font, or a word game with neutral words such as “hammer”, “shoes”, and “jump”.

Based on these results, the authors suggest that analytical processing promotes religious disbelief. However, they are quick to caution that their study doesn’t imply that analytical thinking is the sole cause of religious disbelief. In fact, they suggest that it is only one of several causes, alongside factors such as cultural context and deficits in the intuitive reasoning process. They also refrain from making any comments about the rationality of religious belief or the merit of analytical thinking, and stress that their results need to be replicated in other populations that encompass the cultural, religious, and economic diversity of the world.

Filed under Psychology, Social Science

Think you know your blood type? Think again!

If you’ve ever had a blood test, chances are you know your blood type. A, B, AB, or O, Rh(+) or Rh(-), you may be thinking to yourself. However, in addition to the ABO and Rh blood systems, there exist 28 more blood systems recognized by the International Society of Blood Transfusion, each of which can be attributed to a distinct gene. In fact, a complete blood type would tell you your type for each of these 30 systems. Although the ABO and Rh systems are the most important universally, knowing your blood type for the other, less common blood systems is equally important to prevent adverse reactions upon blood transfusion, or haemolytic anaemia of the foetus in pregnancy- a situation where a woman carrying a child with a blood antigen that she lacks (for instance, a B(-) mother carrying a B(+) child) can produce antibodies against the foetus’s blood cells, which in the most severe cases can lead to the death of the foetus.

Adding to the list of blood systems, 2 studies published in this month’s issue of Nature Genetics have identified 2 new blood systems- Langereis (Lan) and Junior (Jr). Although the Lan and Jr blood antigens have been known for over 40 years, having first been identified in pregnant mothers carrying babies with incompatible blood types, the genes responsible for making these antigens had remained unknown, impeding efforts to determine individuals’ Lan and Jr blood types. But an international group of researchers from France, Japan, Italy, and the US has now identified the genes that encode these 2 systems, and their findings could have broad implications not only for transfusion medicine and obstetrics, but also for personalized medicine.

The group led by Carole Saison and Virginie Helias began their quest to identify the Lan and Jr genes by obtaining strong antibodies to these antigens. They isolated the Lan antibody from the blood of a Lan(-) Japanese woman who had naturally developed it in response to her Lan(+) foetus. Obtaining the Jr antibody turned out to be trickier. Although the scientists had access to human blood containing the Jr antibody, similar to the case for Lan, its levels were too low for isolation. Serendipitously for the group, cat blood cells carry high levels of the Jr antigen, which allowed them to use these cells to capture and purify the antibody. Antibodies in hand, the scientists used a technique called mass spectrometry to identify the antigens with which the antibodies reacted, and as a result identified the genes that encode these proteins. They found that the Lan antigen is encoded by the gene ABCB6, which is known to be involved in the proper functioning of blood cells. The Jr antigen is encoded by the ABCG2 gene, which is involved in removing toxic substances from the body. To make sure they were right, the scientists screened the ABCB6 and ABCG2 genes in individuals who were Lan(-) and Jr(a-), respectively, and found mutations in both genes that could explain the lack of the antigens in these individuals. Interestingly, in spite of these mutations, Lan(-) and Jr(a-) individuals did not show any obvious clinical symptoms indicative of disease.

The immediate implication of this work is clear. The antibodies developed by the scientists will prove invaluable in determining people’s Lan and Jr blood types, which will in turn make blood transfusions and pregnancies safer. But the broader implication of this work lies in its potential impact on the burgeoning field of personalized genomics, which envisions customized drugs for individuals based on their genetic makeup. The ABCB6 and ABCG2 genes have been shown to be responsible for multidrug resistance in cancers, most likely by controlling how efficiently drugs are absorbed by cancer cells and the body. Therefore, it will be interesting to see whether Lan(+) and Lan(-) individuals respond differently to the same drug, making it a better fit for one group than the other. In their studies, the authors found that the Jr(a-) blood type, although rare worldwide, was more frequent in the Japanese and European Gypsy populations, making the detection of this blood type in these populations quite important. Now that an effective antibody has become available, it will also be interesting to see how frequent the Lan(-) and Jr(a-) blood types are in other populations around the world.

Filed under Biology, Genetics

The case for women’s quotas in government

Rehabilitating historically disadvantaged groups in society is a major focus of governments the world over. A common approach used to help underrepresented groups make progress is the quota system (akin to affirmative action in the United States), whereby a percentage of positions in government, universities, or even corporate boardrooms is set aside for underrepresented groups. But how effective are quotas in uplifting the disadvantaged? And by what mechanism do quotas, if they work, bring about real-life changes?

In their article published this week in Science Express, economists Lori Beaman, Esther Duflo, Rohini Pande, and Petia Topalova show that quota systems do indeed bring about real-life changes in society, and that they may do so primarily through a role-model effect. They focus specifically on reservations for women in government, and on the effect of such reservations in closing the gender gap in a traditionally patriarchal society.

The authors of the study took advantage of a naturally randomized experiment in India, where a 1993 law reserved leadership positions for women in randomly chosen village councils across the country. They compared the life aspirations and outcomes of adolescent boys and girls, and of their parents, in villages where the reservation system was enforced and in those where it wasn’t.

They performed their study in the rural district of Birbhum in West Bengal, where they classified 495 village councils into those that had never been reserved, those that had had a reservation for a female chief councilor, or Pradhan, for one term, and those that had had such a reservation for two terms. In each of these villages, they gauged the aspirations of adolescent boys and girls through interviews probing the children’s desired level of education, desired age of marriage, preferred occupation at 25 years of age, and whether they wanted to become a Pradhan themselves. Through the same questions, they also gauged the aspirations of parents for their adolescent children.

In never-reserved villages, there was a clear gap between parents’ aspirations for their male and female children. Parents had higher aspirations for their sons, wanting them to study more, marry later, and have professional occupations. In twice-reserved villages, however, the gender gap was significantly reduced, with parents more eager to see their daughters graduate from secondary school, choose their own professions rather than become housewives or do what their in-laws prefer, and even become a Pradhan. Interestingly, the gender gap was unaffected in once-reserved villages, suggesting that longer exposure to a female Pradhan is what helped reduce the gap. Similarly, the girls’ aspirations for themselves were also much higher in twice-reserved villages than in never-reserved villages. The girls were more likely to want to finish school and marry later. In fact, in twice-reserved villages, girls were more likely than boys to want a job with higher educational requirements. Furthermore, improved aspirations for girl children actually translated into improved educational outcomes. As many girls as boys, if not more, finished secondary school in twice-reserved villages, and girls spent less time on household chores in twice-reserved villages than in never-reserved villages.

So were these improvements in girls’ aspirations and achievements due to the inspiration provided by female leaders, or to policy changes implemented by them? To answer this, the scientists used the professional outcomes of young adults in these villages as an indicator of improved opportunities. They found that the gender gap in educational and professional achievement between young men and women was no smaller in twice-reserved villages than in never-reserved villages. Therefore, the authors suggest, the reduction in the gender gap in adolescent aspirations and outcomes is most likely because female Pradhans serve as positive role models for girls and their parents, raising aspirations and hence female achievement.

Filed under Economics, Social Science

Ignorance is bliss….

….or at least has the potential to promote democracy, according to a study published in this week’s issue of Science Magazine.

Building consensus among people with varying beliefs and preferences can be rather difficult. These difficulties can be significant obstacles in a democratic system, where decisions are taken collectively and a multitude of conflicting opinions can impede the formation of a consensus. Case in point- a paralyzed US Congress, which has been unable to agree on anything, whether it is raising the debt ceiling or cutting payroll taxes. Such situations of impasse can be dangerous, enabling an extremely opinionated minority to sway the decision in its favour- possibly a less costly outcome than continued deadlock, but a minority takeover nonetheless. But what sorts of conditions would allow a minority takeover? This is the question that Iain Couzin and his team of scientists tackle in their study. And contrary to the common notion that naïve or uninformed individuals are more likely to be manipulated by an extreme minority, the authors found that uninformed individuals can actually strengthen the position of the majority and encourage a democratic consensus.

Using three different theoretical mathematical models of group dynamics, Couzin and team found that if a group was unequally divided between two different targets, the group moved towards the majority’s target as long as the majority had a stronger preference for its target than the minority had for its contrasting target. In other words, as long as the majority had the stronger opinion, the majority got its way. If, on the other hand, the minority had a stronger preference for its target than the majority had for its own- i.e., the minority was more opinionated than the moderate majority- the group moved towards the minority’s target. However, when the scientists introduced into their models individuals who were uninformed about the two targets, the group once again tended to move towards the majority’s target, suggesting that uninformed individuals tend to reinforce the majority rather than be manipulated by an extremely opinionated minority. This was only true, however, when the number of uninformed individuals was small; as their number increased, their effect on group consensus diminished.
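
To see the arithmetic behind this result, here is a deliberately crude caricature of my own construction- the study’s actual models track agents moving through space, and also capture the breakdown at large numbers of uninformed individuals, which this toy omits. Informed agents cast votes weighted by the strength of their preference, so an opinionated minority can outvote a lukewarm majority; uninformed agents respond only to numbers, siding with the larger faction.

```python
# A crude caricature of the result (not the study's spatial model):
# informed agents vote with conviction-weighted strength; uninformed
# agents side with whichever informed faction is numerically larger.

def group_decision(n_maj, w_maj, n_min, w_min, n_uninformed, w_uninf=0.3):
    """Return which faction's target the group adopts."""
    vote = n_maj * w_maj - n_min * w_min   # conviction-weighted votes
    side = 1 if n_maj >= n_min else -1     # uninformed follow the numbers
    vote += side * n_uninformed * w_uninf
    return "majority" if vote > 0 else "minority"

# A lukewarm majority of 6 (weight 0.3) vs an opinionated minority of 5 (weight 0.8):
print(group_decision(6, 0.3, 5, 0.8, n_uninformed=0))   # -> minority
print(group_decision(6, 0.3, 5, 0.8, n_uninformed=10))  # -> majority
```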

But how well do these theoretical models apply to real live beings? To test this, the scientists conducted experiments with golden shiners, a species of fish that demonstrates strong group behaviour. They trained two groups of fish to swim towards different targets- one blue, one yellow. It turns out, though, that golden shiners have an inherent preference for yellow, making yellow the more strongly preferred choice in a mixed group. As a result, when the fish were mixed together, they all tended to swim towards the yellow target even when “blue-trained” fish outnumbered “yellow-trained” fish in the tank. So the minority won in this situation because the group’s preference for yellow was stronger than its preference for blue. But when the scientists added untrained fish into the mix, the group began to swim towards the blue target- the weakly preferred target of the “blue-trained” majority. So just as in their theoretical models, the scientists saw that a small number of undecided individuals could sway the group decision towards the preference of the majority, even when the majority’s preference was weaker than the minority’s.

The real question now becomes: does the same hold true in humans? Can people improve the workings of a democracy by being oblivious to the goals of that democracy? If so, I must say, I have a newfound respect for ignorance. However, based on the results of this study, I imagine that only a very small number of ignorant individuals will be tolerated before democracy begins to fail again.

Filed under Biology, Evolution, Mathematics, Psychology

Shame, honour, and the public good

The concept of public shaming to discourage antisocial or undesirable behaviour has been around for centuries, whether through medieval-style pillories used for public punishment or the modern-day tactic of forcing men to marry animals with which they had illicit relations. The opposite approach- honouring those who demonstrate prosocial behaviour in order to encourage it- is equally prevalent, in the form of coveted awards for intellectual, cultural, or philanthropic achievements. But can these same concepts also be used to encourage public cooperation toward a common social good? This is the question addressed by a transatlantic group of mathematicians and biologists from Canada and Germany in an article published online this week in Biology Letters.

The scientists, led by Jennifer Jacquet, performed a “public goods” experiment on a group of undergrads from the University of British Columbia to determine whether the fear of shame or the promise of honour would encourage the students to be more cooperative. The game involved donating money to a common pool that would be multiplied by a given factor, with the earnings split amongst the students at the end of the experiment. Thus, the more each student contributed, the more their group would benefit. The students were chosen from the same class, so that they would interact with each other often throughout the semester, making their reputations as generous or stingy rather important. The scientists split the students into 10 groups of 6 players each, and divided the 10 groups into a control group, a shame group, and an honour group. They gave each student $12 at the start of the game. At each of 12 rounds, the students had to decide whether to donate $1 of their starting capital to the common pool or nothing at all. In the control group, all donations were anonymous. In the shame group, the students were told that at the end of the 10th round the 2 lowest contributors would be exposed, whereas in the honour group the 2 highest contributors would be named. The scientists then tracked differences in levels of contribution, and hence cooperation, across these 3 groups. Not surprisingly, at the end of the 10th round, average donations in the shame and honour groups were significantly higher than in the anonymous group. The fear of shaming and the allure of recognition gave the former two groups more incentive to be generous, whereas, in the absence of any incentive, the students in the anonymous group chose to be selfish and try to mooch off the contributions of others within their group. Interestingly, however, fear of shame was not a more powerful incentive for generosity than the desire for honour, as the average donations in the shame and honour groups were similar. Even more interesting, after the 10th round, once the 2 highest and lowest contributors had been revealed, donations in the shame group fell drastically, presumably because the threat of being shamed no longer existed. Cooperation, on the other hand, was maintained in the honour group in rounds 11 and 12, with the 2 honoured contributors maintaining their high levels of donation, perhaps because they felt obliged to uphold their new reputations. Therefore, both the fear of being shamed and the desire to be praised serve as equally strong inducements for group cooperation.
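
The incentive structure of such a “public goods” game is easy to see in numbers. Here is a minimal sketch of the payoff arithmetic- the $12 endowment and $1 stakes are from the setup described above, but the multiplier is my illustrative assumption, not the study’s exact factor.

```python
# Payoff arithmetic of a public-goods game (the multiplier here is
# an illustrative assumption, not the study's exact factor).
def payoffs(contributions, endowment=12, multiplier=2.0):
    """Each player keeps whatever she didn't donate, plus an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Six players: five donate $10 over the game, one free-rides.
print(payoffs([10, 10, 10, 10, 10, 0]))
# -> the free-rider ends up exactly $10 richer than each donor,
#    which is why anonymity invites mooching.
```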

I wonder whether this tactic wouldn’t help improve some of the flailing economies of the world- at the end of each year, the government should announce the names of the two highest and lowest taxpayers among the top 1% of earners. New tax plan, President Obama?

Filed under Biology, Psychology, Social Science