When less is more

One of the questions that every theorist stresses over when constructing a model is how complicated the model should be.  The correct answer, of course, is that the model should be exactly as complicated as it needs to be, but no more.

Try this thought experiment.  Look at a glass of water sitting on a nearby table or desk.  Imagine pushing the glass over the edge, and try to follow the trajectory in your head.  My guess is that you can visualize this process fairly well.  You might even be able to picture the shards of glass scattering when it hits the floor, or the shape of the puddle that the liquid ends up making.

We can picture this process so clearly because in our lifetimes we have seen many things fall.  Sometimes the object was a glass of water from a table, sometimes it was a pencil from a desk, sometimes it was an apple from a tree.  And using these observations, we developed a model that allows us to predict how things will fall before they even begin to move.

In the thought experiment above, a glass of water was pushed from a table.  Does the trajectory of the fall change if the water was replaced with beer?  What if the liquid was dyed green?  What if the glass was dyed green?  If the goal is to predict the trajectory of the falling glass, these details don’t matter, and they can be left out of the model.  Interestingly, leaving these details out of the model makes the model more general, and in turn more useful.  I mean, how often do you push a glass of green water off of a table?  What good would that model be?
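To make the point concrete, here is a minimal sketch of such a stripped-down model in Python. The table height and push speed are made-up values for illustration, and air resistance is deliberately left out, just like the color of the water:

```python
# A pushed glass, stripped to the details that matter: horizontal speed
# and gravity. The color of the liquid never enters the model.
G = 9.81  # gravitational acceleration, m/s^2

def trajectory(v0, height, dt=0.01):
    """(x, y) positions of an object leaving a table edge at horizontal
    speed v0 (m/s) from a table of the given height (m), until it lands."""
    points = []
    t = 0.0
    while height - 0.5 * G * t * t >= 0.0:
        points.append((v0 * t, height - 0.5 * G * t * t))
        t += dt
    return points

path = trajectory(v0=0.5, height=0.75)
print(f"lands about {path[-1][0]:.2f} m from the table edge")
```

The same dozen lines predict the fall of a glass of water, a pencil, or an apple, which is exactly why leaving out the irrelevant details makes the model more general.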

A hidden benefit from fleeing the ivory tower?

Continuing on the theme of communicating to journalists and the public (brought up by David and Megan), I thought it might be useful to bring up some insights from a talk I attended yesterday. Nancy Baron, author of Escape from the Ivory Tower and director of science outreach at COMPASS, is a zoologist and writer who coaches scientists on the merits, and the how-to, of communicating their research.


We tend to get fixated on the disasters that sometimes occur when science gets haphazardly, awkwardly or incorrectly conveyed to the public, so it was slightly refreshing to hear Nancy talk about some success stories.  However, the story that really struck me (somewhat selfishly) tied into the very first point she made during her talk. She began by making the bold claim that practicing communication of science actually improves our own science.

She made the comment initially and I kind of assumed it was one of those things you might say to further motivate scientists. I mean, come on, she has an agenda too. Something akin to “eat your peas because they are actually good for you.” But she shared a story later in her talk about being in a bus for 8 hours with a group of scientists and forcing them each to write a message box (a way to condense your scientific message so that others can understand it). She then had them switch seats periodically to get everyone to explain their box to the others. The result? New insights, clarifications and switching of individual agendas.

Why did this happen? Scientists speaking the language of science often get stuck on the details, especially if they’re relating them to colleagues in their own fields. Forcing investigators to zoom out and analyze what they see as the most important part of their work enables a different kind of discussion: the kind that may not catch the incorrect use of a statistical test, but that can reveal a misplaced focus, clarify a goal, or even suggest a new direction for the research to take.

Maybe another reason for taking the plunge?

Bad science metaphors: Like a prom dress made from carpet remnants?

In one of my very favorite Futurama episodes, it’s revealed that the main character, Fry, is immune to stupefaction rays because he has cobbled together a random assortment of electromagnetic waves into a working brain, like a prom dress made out of carpet remnants. Although it’s science fiction, the prom dress is a perfect metaphor (or simile if you want to nitpick), and it makes me wonder why it’s so difficult to find good metaphors for real science. So instead of the blog post on circular variance I was supposed to write, I’m going to talk about a news article Dave sent around regarding climate change and what 95% certainty means.

The article is not kind to scientists, remarking that even when we scientists know things, we can’t say that they are happening with complete certainty. (For the record, I’m comfortable saying with 100% certainty that the sun will come up tomorrow.) The part that bothered me the most was the paragraph that took a good metaphor and got it backward:

“Some climate-change deniers have looked at 95 percent and scoffed. After all, most people wouldn’t get on a plane that had only a 95 percent certainty of landing safely, risk experts say.”

I’m sure that the first sentence is true, and the plane metaphor isn’t bad–though the evidence that smoking causes cancer is a more perfect parallel–but it should have been a plane that has a 95% chance of crashing if we do nothing, and a 5% chance of landing safely. Given the consequences of doing nothing, most people would choose to do something even if the odds of a crash were much lower.
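The arithmetic behind that last sentence is worth making explicit. A back-of-envelope sketch, with made-up illustrative stakes rather than anything from the article:

```python
# Back-of-envelope version of the corrected metaphor, with made-up stakes:
# say a crash costs 100 units and taking action costs 1 unit.
def expected_loss(p_crash, crash_cost=100.0, action_cost=1.0):
    """Expected loss of doing nothing vs. acting, for a given crash probability."""
    return p_crash * crash_cost, action_cost

print(expected_loss(0.95))  # the corrected odds: doing nothing is clearly worse
print(expected_loss(0.05))  # even at 5%, doing nothing still loses to acting
```

With these (hypothetical) stakes, even a 5% chance of a crash gives an expected loss of 5 for inaction against 1 for acting, which is the whole point: when the consequences are severe enough, you act well below 95% certainty.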

So how did the metaphor go so wrong? It’s clear that scientific findings (especially of this magnitude) will be distilled into metaphors–whether scientists take part or not–and that some of the complexity will be lost in translation. A catchy metaphor will take on a life of its own, while a lucid summary of the science may be quickly forgotten, so metaphors are both useful and dangerous. Scientists are uniquely qualified to choose good metaphors, but how can we choose wisely? (And incidentally, will my tendency to compare a particular malaria strain to Mary Poppins come back to haunt me? The metaphor works on many different levels, I think, but that’s no guarantee.) Misleading rhetoric can be surprisingly durable, not unlike a prom dress made from carpet remnants.

A meaningful life

Do you want a meaningful life or a happy one?

Though the meaning of life might be a little heavy for the lab blog, I’d still recommend reading this article. One of the central themes of the piece is that although happiness and meaningfulness overlap, they are not the same. And since we are doctors of philosophy, or on our way to that distinction, a little philosophical discussion seems appropriate for the blog.

If you aren’t inclined to read the whole thing, search for the word “research” on the page – it’s about us. By choosing to devote a majority of our lives to conducting research, we have chosen a life rich in meaning. The flipside is that because experiments often do not go as planned, our collective foray into the great unknown is inherently stressful, which can decrease happiness. Happiness was described as more of a fleeting and perhaps empty enjoyment – one comparison is the difference between a working life and a life after retirement. Retirees are often happy, but don’t feel their lives are as meaningful as they were when working. (As an aside, retirees often volunteer their time – an activity that enriches the feeling of living a meaningful life.)

Just something to think about, and maybe discuss at the pub.
Strange & Bountiful

This summer, I have been fortunate enough to travel quite a bit. I have heard lectures, participated in workshops and met innumerable interesting, talented, fun folk. Along the way I’ve learnt many lessons, both formally and via pattern recognition, from why male fish guard their eggs more than female fish; to how not to give a talk; to how to track down your favorite speaker at a conference without seeming creepy (unfortunately the latter involved some trial and error). All these things are eminently bloggable, but I’ve taken strange results as my subject because their pursuit has emerged as an important determinant of scientific success.

As tempting as it may be to ignore it, that annoying little spot could be the making of you. Or not.

We all know that penicillin was discovered by accident but, in an age when ‘hypothesis driven research’ is the mantra, it’s easy to think that science doesn’t work like that anymore. Yet I heard the same story, over and over, from now eminent scientists. At EMPSEB Laurent Keller, himself quite the character, told how he discovered that male little fire ants reproduce clonally. This finding revealed the importance of sexual conflict in the evolution of genetic systems and shed light on the dynamics of ant societies. Japanese scientists had the same data as him, years earlier, but had ignored it as an outlier – blinded perhaps by their hypothesis and by statistical methods that emphasize averages over outliers. He suggested that most scientists would do the same as his unfortunate predecessors. Then along comes Nobel Prize winner Peter Agre, telling a similar story. ‘Look at the things that don’t fit’, he said.

It was a relief to me when Andrew asked Peter the question that these stories always leave me with (to paraphrase): ‘how can you tell what’s crap and what’s strange and interesting?’ The ability to filter ideas, even conventional ones, is one of the things I struggle with most. Yet numerous people I’ve met have said it is the skill in science. Knowing which ideas to pursue, and which experiments are interesting, timely, yet doable, can make the difference between a high-impact publication and something very humdrum. Thankfully, the advice I’ve accumulated on dealing with strange results turns out to be surprisingly simple…

1) Confirm the strange phenomenon. Do the experiment again and again. Is the strangeness still there?

2) Talk to an array of people about the strange phenomenon. A discovery in one discipline can be blindingly obvious to people in another.

3) Pursue the strangeness slowly and carefully, with an open mind, even to the idea that it’s actually nothing. Be courageous if it takes you in a direction you aren’t familiar with.

4) Never hang your hat on strangeness. Have a ‘portfolio’.

This final lesson, which sounds so much like management speak, is the one piece of advice common to almost every scientist I’ve met.

So strange is good, in small doses.

Taking survival of the fittest to the next level

Spindler - one of the first "three parent" primates

Our current biotechnological skills are so advanced that we, as a species, can artificially select and even rebuild our own individuals. Have we outsmarted nature? Has selection for intelligence led to the ability to increase our fitness as a species through artificial selection of the fittest individuals? Or will evolution eventually catch up with us, and will we realize that tinkering with genomes does not come without a cost? Either science or time will tell.

The so-called test tube babies were the first steps in this process. Nowadays the technique is more advanced, and selecting genetically healthy embryos over sick ones is among the possibilities. This works well when genetic diseases originate in the nuclear DNA, but what to do when a disease is located on the mitochondrial DNA? Now we are facing a problem, as maternal mitochondria are always passed on and thus selection is not an option. The solution: replace the faulty mitochondria from mom with a healthy set from a donor, a solution dubbed in the media as “three parent babies”.

There are some clear ethical issues with this that I do not plan to discuss. What I’m more interested in is the biological consequences. Three evolutionary biologists raised their concerns today in Science. After all, for millions of years selection has been at work optimizing the crosstalk between nuclear and mitochondrial DNA, and for all those millions of years, maternal mitochondrial DNA found itself with a copy of mom’s haploid genome. What happens when this is suddenly, radically changed? And what happens when donor and recipient are not closely related? This is where animal models come in: let me introduce Spindler, the cute macaque monkey in the picture above, one of the first four ‘three parent babies’. He and his peers seem to be fine; however, they are only three years old and their two moms were from the same troop. Only time will tell whether Spindler will go on to have healthy babies of his own. But desperate childless couples with a heritable mitochondrial disease do not want to sit around waiting until Spindler reproduces. Neither does the United Kingdom, which wants to make the technique available to desperate parents-to-be soon.

What do you think? Madness? Or is it up to the parents to take the potential risk? Compared to the disease they would otherwise pass on to their child, the perceived risk might actually be considered low. But is it? There already exists a technique to circumvent passing on mitochondrial disease: simply use a whole donor egg. The only added advantage of the mitochondrial replacement technique is that the child would carry mom’s own genes. Selfish? Or is it simply using our intelligence to increase our fitness?

Why can’t I make up titles?

In my scientific career I have had to acquire many skills. I’ve had to learn methods, experimental design, analyses, and how to write a convincing story. I feel like I’ve done a pretty good job at learning these skills (although, as we have discussed on this blog many times, they are never mastered). One thing I consistently struggle with is titles.

I hate making up titles.

This applies to talks, posters, papers, and yes, even blog posts. I usually go for the most boring version and my coauthors add something more exciting at the beginning. Example of a Lauren title “Behavioral observations and sound recordings of free-flight mating swarms of Ae. aegypti in Thailand”. Examples of titles I had help with, “Sizing up a mate: variation in production and response to acoustic signals in Anopheles gambiae”, “Harmonic convergence in the love songs of the dengue vector mosquito”, “Manipulation without the parasite: altered feeding behaviour of mosquitoes is not dependent on infection with malaria parasites”.

I feel like I spend a huge amount of time deciding on these. Does anyone have a method or set of guidelines for making nice titles? Better yet! I need to give a talk encapsulating both my dissertation work on the role of acoustics in mating swarms and the effect of malaria parasites on mosquito host-seeking behavior. Does anyone have ideas?

Love Songs and Zombies? Vampires sing love songs but are not zombies?

Sigh.

Plato was on to something

In Book One of The Republic, Polemarchus and Socrates are arguing over the concept of justice, which may be more accurately translated from the original Greek as ‘doing right.’ As they discuss their ideas of ethics, morality, and wrong vs. right, interesting parallels emerge between the conversation Plato depicts here and topics of interest in our lab. Socrates is debating with Polemarchus about how to define a just person in different scenarios (from chess to boxing). He probes whether being just is sometimes a contradictory trait as the same action can be just or unjust depending on the situation. Here is where the conversation turns into one we might have in lab meeting:

“Does not the ability to save from disease imply the ability to produce it undetected?” (332-333).

This quote strikes me as similar to much of what Andrew says about drug resistance. What is our aim in aggressive drug treatment strategies? Is the aim to “save from disease” or “produce it undetected”? The difference is that in the former attempt to save from disease, we attempt to kill all the “bad guys,” where in the latter we strike a compromise: if the patient is “healthy” and not experiencing symptoms, we may not need to kill all the parasites but allow it to persist in Socrates’ “undetected” state. Is one “right” and the other “wrong”? It seems like the question Socrates was asking is still being asked by our experiments.

Reading Plato this week has also fortuitously coincided with the mandatory SARI (Scholarship and Research Integrity) meeting I attended on Monday. The more I delve into a historic perspective on ethics, the more I realize the timeless nature of our dilemmas. How do we act ethically? What is or isn’t moral? Whether in our experiments or our boxing or our chess games, the need to evaluate our morals and explore ethical concepts remains relevant. The discussion leader of the SARI session reiterated the idea that there may not be a “right” answer but there are three other Rs: Reduce, Replace and Reevaluate. What she meant is that we need to continue the discussion and keep questioning our actions and protocols, because although many of the debates we have date back to Plato’s time, and seem unsolvable, we can arrive closer to ethical solutions than we were in the past by maintaining an active discussion.

Source: Plato. The Republic. Translated by Desmond Lee. Penguin Classics, 2nd ed. 1974.

More storytelling

I recently visited my grandmother with my father and uncle where, inevitably, the conversation turned to the time they lived in Brazil. As I struggled to understand the fast-paced Portuguese, one memory in particular caught my attention: my grandmother spraying the bedroom with a handheld insecticide sprayer as her sons prepared to fall asleep under their bednets. It must have worked because, as far as they know, none of them ever became sick with a mosquito-borne illness.

This story made me interested in finding some historical perspectives on malaria control. While I have read one or two papers on malaria in the past year, what I have in mind is not a data driven article or even a review, but an accessible narrative with stories about people. Something like The Great Influenza or The Family That Couldn’t Sleep. A quick Google search turned up The Making of a Tropical Disease: A Short History of Malaria (reviewed in the Lancet here), which I’ve ordered, but if anybody has any other suggestions I’d love to hear them.

Bonus link: How the U.S. stopped malaria, one cartoon at a time.

Version 36

Yesterday, Silvie’s paper finally got published. It is one of the most important papers to come out of the group, but I have never had such a hard time publishing a paper. It took 13 months and ended up in the sixth journal we tried. What finally got published was version 36. I do not recall ever going much past version 20 on any other paper.

I summarize the sorry saga here. Perhaps because I turned it into a pedagogical exercise, some Higher Power decided to turn it into a lesson. If so, sorry Silvie. Coauthor Troy said today it was such a long process, he’d forgotten about the paper. Silvie replied: only now could she start to forget about it.

What was the lesson? Obviously we were aiming high journal-wise, and clearly we failed to persuade top editors that it was sufficiently important, novel or otherwise cool. So there was a marketing failure (which I still do not understand because you never get feedback). But when we finally got reviewer comments, there was not much between the negative and the positive. Nobody ever doubted the data, or the conclusion. Indeed, version 36 is pretty close to version 15.

I can’t help but make the comparison with Vicki’s most important paper. Her paper is surely far more contentious than Silvie’s, and will less obviously stand the test of time. Yet it sailed into print in a higher impact journal. The only difference I can see is the humility of the rhetoric.

Social Media: Good or Evil?

Writing this blog is really really difficult for me. It’s hard to come up with a topic that I think would be relevant, and that people even want to read about. On top of that, a blog isn’t the type of platform to compose a 3,000 word litany about something that interests you, no matter how eloquent it may be. I also worry about my web presence, and the fact that this blog is written under my name, which can be traced to my lab page that future collaborators or employers may see. I worry about sounding uneducated, insensitive or boring.

Using blogging and other types of social media effectively is HARD. Which is why I was so surprised, upon reading entries and comments on Andrew’s class blog, that students think short, online communication will be a cakewalk. It isn’t. I’ve read these blogs before, and they’re mostly terribly written and extremely boring. Shocking for a generation that communicates more and more exclusively electronically (I’m not removing myself from this category either; I have my fair share of a Facebook addiction). Still, here is what I consider an “effective” use of social media: blogging platforms, Twitter, Instagram, reddit and Facebook can be extraordinary, free-access resources for public education and the exchange of ideas. Blogging, tweeting, and sharing are some of the most phenomenal and powerful tools for disseminating ideas that we have at our disposal. We can use them to foster collaborations across continents, to educate for free, and to more fully realize our potential as a global, multicultural community.

#loljk

But, the truth is, most people don’t really use it for that. The initial blog post that Andrew assigned his class was mostly for the purpose of getting used to the blogging platform and its technicalities. A portion of the assignment was to embed a live link. I was shocked by how many students linked to their Instagram, Facebook and Twitter accounts. Most of these accounts were linked to their actual full names, and the content wasn’t private. There were a LOT of #scantilycladwomen and lots of #underagedrinking, documented for everyone to see (I’m not here to judge anyone’s habits, but that stuff is online forever now). And they (being part of my millennial cohort, I can’t really separate myself) seemed desperate to share it.

Everything has its place, and just like there are many people who think political discussions don’t belong on Facebook, there are also many people who don’t think that everything you eat for every meal every day needs to have its own photography session. I’m not arguing there is a correct way to use social media, I just think sometimes its true potential is squandered. Perhaps once I can get blogging down to an art I can start summarizing my ideas in 140 characters or less!

Significant but not significant

I once took a class with a professor who said that if the sample size in an experiment had to be greater than ~30 to detect significance, the result didn’t matter.  As a young graduate student, I thought he was just being snobby about his field where those sample sizes were sufficient to publish.  It was only years later that I realized he was referring (I think) to the difference between a statistically significant effect and a biologically significant effect.

“Statistical significance” means that, for a specific (null) hypothesis, the available data would be unlikely if that hypothesis were true. Many null hypotheses, however, are so obviously false that no experiment is needed to reject them. A study of pigeons in New York City and London, for example, might ask whether the pigeon densities in these two cities differ. With a small study the difference might be non-significant; with a larger study it might be significant. But regardless, there is an exact number of pigeons per square meter in New York, and that number differs from the one in London. No experiment needs to be performed to falsify the hypothesis that they are equal. As Burnham and Anderson would put it, this is a parameter-estimation problem, not a hypothesis-testing problem. If the pigeon densities in these cities were very different, a small study could detect the difference; if the densities were fairly similar, a much larger study would be needed. But in either case, the hypothesis of equality is clearly false. Whether the difference matters biologically, however, is an entirely different question.

So when my professor said that he only cares about the results of experiments that can show significance with sample sizes of 30 or less, what he was really saying is that small differences are unimportant, and his rule of thumb for determining whether the effect is small is to look at the sample size in the experiment.  In many cases, I see his point, but there are also clear exceptions.  Perhaps the most obvious is in measures of selection coefficients.  Tiny differences in selection coefficients can have really dramatic effects over multiple generations, and it is very non-obvious to me how to quantify these differences with sample sizes of 30.
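The pigeon logic is easy to simulate. The sketch below, a toy two-sample z-test in plain Python (my own illustration, not anything from that class), shows how the same tiny true difference flips from non-significant to highly significant as the sample grows:

```python
import math
import random

random.seed(1)

def p_value(n, effect=0.05):
    """Two-sample z-test on groups whose true means differ by a tiny
    amount (0.05 standard deviations), with n observations per group."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2.0 / n)  # standard error, assuming known unit variance
    z = diff / se
    # two-sided p-value from the normal distribution
    return math.erfc(abs(z) / math.sqrt(2.0))

print(p_value(30))       # tiny effect, small sample: usually not significant
print(p_value(100_000))  # same tiny effect, huge sample: highly significant
```

The effect size never changes between the two calls; only the sample size does. That is exactly the distinction between statistical significance and a difference that matters.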

How to change science

I have just finished reading The Silwood Circle. It’s by an historian of science with a big interest in the philosophy of science. Despite that, I could hardly put it down. I found it riveting partly because I know the players involved, and partly because it is about putting math into ecology (and why that matters even though the models are largely heuristic). But mostly I could not put it down because the book is really about a bunch of men (all men), who set out to insert ecology into the heart of British science and The Establishment – and why they succeeded. I think it has lessons for young folk who want to change science – and older folk who want the next generation to change science.

Silwood Park is a campus of Imperial College London. In the late 1960s, Richard Southwood and slightly later Bob May set out to use Silwood to transform ecology, particularly British ecology. By the time I came on the scene in the mid to late 1980s, they had done it. It was achieved by picking the right people (smart, ambitious, sociable) and opening doors (career opportunities, prizes) once those people performed (which the anointed ones did). But more importantly, it seems, it was done by putting together people who shared a common philosophy about how to do science but whose interests and specific expertise were complementary within the group. They drove each other forward (as big egos do), but as part of an us-against-them mentality, not a dog-eat-dog approach. And my sense is they laughed and argued and socialized as a group, something which really glued them together. Together, they rode 1970s environmentalism into the upper reaches of the British establishment. Much of it because they hiked together. It might all hinge on the hiking.

I have heard the criticism that the book fails to acknowledge what happened elsewhere in the world at that same time. That is perhaps a little fair. I also wonder if the Southwood ambition and associated narrative look a bit clearer in retrospect. But to me, what is really missing from the book is an analysis of the impact of charisma. Several of the protagonists are (or were) some of the most charming, forceful, articulate, erudite, stylish, visionary, self-confident, stimulating people I have ever met in science. Add to that potent mix their ability to unite previously disparate subjects like pesticides, parasitoids, parasites, predators, pathogens, public health, plants and a whole lot of other p-words like parties and pubs, and well, ka-Pow.

In search of patterns to recognize…

I saw a very nice dress online recently, but my first thought was that I could totally count all of the dots with my newly-coded macro in ImageJ. I’m inordinately proud that I have figured out how to tell my computer to do what comes naturally to human beings—pattern recognition. Why make a program that does something I can do better? As I mentioned before, I have approximately 3000 images of malaria parasites doing their thing in a jar of red blood cells. While I’m great at recognizing a red blood cell from say, schmutz on the microscope, I’m not great at counting things in a timely or orderly fashion. I suspect that humans have not evolved to be efficient counters of dots, and so I’m leaning on my computer for help.

The program can handle clumps of red blood cells (but I have to count the parasites).

While I now have a working program, I’ve spent a long time trying to quantify how it is that I can tell that this blob circled in red is actually a clump of five red blood cells, or why I think the cell circled in purple has been infected by a parasite.

This process has led to a lot of existential musing: How does ImageJ decide which color is the background? How do I decide which color is the background? Richard Pinapati of the Ferdig lab had some great suggestions on how to approach the problem, and the online help was actually pretty helpful, since plenty of other people have had similar problems. Though I haven’t seen a program to count red blood cells, many have used ImageJ to automate the counting, or even the parasite-counting (in fact, this researcher has figured out how to count the parasites in a pile of red blood cells). Others have suggested collecting images with a cell phone-mounted microscope (!) and transmitting them to trained microscopists, and still others have thought of crowd-sourcing their parasite identification problems. In that vein, how many malaria parasites do you think have crammed themselves into this red blood cell?

Just like trying to figure out how many humans will fit into a phone booth.

As an unforeseen side-effect of the programming process, any image vaguely resembling a blood smear catches my eye, and I find myself pondering how to count things that no one has any interest in counting. Lauren has informed me that it’s creepy to keep asking people if they need to count things, so I’ll just say that if you have dot-like biological objects you’d like to count (mosquito eggs, oocysts, sporozoites), please let me know, and we can give the new program a whirl.
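For the curious, the core of a dot-counting routine is small enough to sketch in plain Python. This illustrates the general threshold-and-label idea, not my actual ImageJ macro:

```python
# Threshold a grayscale image, then count connected clumps of foreground
# pixels (4-connectivity flood fill). Dots are assumed darker than the
# background, as in a stained blood smear.
def count_dots(image, threshold):
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                count += 1  # found a new dot; flood-fill to mark all of it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and image[y][x] < threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Toy "smear": white background (255) with three separate dark dots.
img = [[255] * 20 for _ in range(20)]
for y, x in [(2, 2), (2, 3), (3, 2), (10, 10), (10, 11), (15, 5)]:
    img[y][x] = 10
print(count_dots(img, 128))  # prints 3
```

The hard part, of course, is everything this sketch skips: choosing the threshold, telling a red blood cell from schmutz, and splitting clumps of touching cells, which is where the existential musing comes in.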

Some thoughts about blood

This image was lifted from the Radiolab page without permission. For other cool art by Jonathon Rosen check out http://jrosen.org/index.html

I’ve been thinking about blood lately. I’m not the only one interested in the stuff. There was a great Radiolab podcast about blood not too long ago that covers the human obsession with blood. Recently I’ve been a little obsessed with the question of why blood is so bubbly. Are our veins filled with soap? (Don’t worry, I’m joking).

I’ve worked with donated human blood to make aliquots for feeding to bedbugs, mosquitoes, and for the malaria culture. In doing so, I’ve noticed blood forms pretty stable bubbles very easily in at least two instances. The first is when it is mixed with air, such as in a pipette when trying to get the last bit of blood from the bottom of a vial, and the second is when mixed with water, observed when washing up. Why?

My first thought is that maybe it is because of the anticoagulant (CPDA-1) used in the blood bag for collection, but I couldn’t pick out any ingredients that were a red flag for a bubbly reaction with air or water. Blood is really viscous, which might explain bubbles forming readily in a pipette, but not in water.

Sigma-Aldrich’s website on properties of blood was helpful and maybe a step forward to solving the mystery of the bubbles. Blood contains 0.9% inorganic salts, including sodium, potassium, and carbonate. I’m no biochemist but sodium carbonate and potassium carbonate can both be used to make soaps so maybe my joke that we have soap in our blood isn’t so far off the mark? I’d love to know.

Reaping the fruits of time well-spent on vacation

The beautiful mind-clearing Costa Brava

I have just returned to work from a month-long vacation. Not counting maternity leaves (definitely NOT a vacation), this is the longest time I have taken off work in more years than I can remember. Apart from being the longest vacation, it is probably also the one where I thought about work the least, which would be close to zero. I can only advise everyone to do the same; I have never been this energized getting back to work.

Now, the real question is whether a break like this increases productivity in the long term and should be repeated on a yearly basis. Surprisingly, there is much less research published on this topic than you would think. The only long-term study I am aware of showed that not taking an annual vacation is costly for your health and is even associated with premature death. Talk about reduced productivity! However, most studies focus on short-term effects following short holidays (< 14 days), and these generally do not find a lasting positive association. One recent study tested the hypothesis that less than two weeks is simply too short to recharge your batteries, and zoomed in on people escaping work for 2-4 weeks. The authors found a positive association with happiness and well-being in the holiday-goers while on holiday, only for it to disappear on their first day back at work.

So much for my hypothesis. I guess it is reasonable to assume that my high-energy state is not going to last for long. Thus, if you’ll excuse me, I’ve got some work to do before my batteries run out again!

Looking for a hiccup cure

I have had chronic hiccups for the past ten years. Like sneezing, hiccups are somewhat understudied. In my case, as in most, hiccup occurrences appear to be random, sometimes coming at inopportune times, and sometimes associated with soda drinking, certain medical conditions, or nerves. With half of my current lifetime having been interrupted by periodic diaphragm spasms, I have an interest, almost a hobby, in trying experimental treatments. Drinking water in various ways (upside down, through paper towels, without breathing for long periods of time) and taking magnesium supplements have all been unsuccessful. Some people advocate hypnosis; others suggest eating spoonfuls of plain sugar. Googling information on hiccups makes one thing ever more apparent: the plethora of bogus-ness available on the internet. I am fairly certain that eating raw sugar, getting “scared silly,” being tickled or distracted, drinking less beer, or swallowing antacids are not going to help (my doubts might be based on having attempted a majority of these).

My interest in a hiccup cure has become sidelined by my interest in how the internet changes our approach to getting medical “answers” and has led to the spread of misinformation. What were once considered old wives’ tales become easier to access via the internet and appear somewhat more valid when backed by reputable sources. The Mayo Clinic’s webpage cites “sudden excitement” as a potential cause of hiccups. Perhaps I am chronically “suddenly excited,” but what I think is more likely is that we have become victims of our persistent desire to explain things, which leads us to offer explanations even when they are inaccurate or lack scientific evidence. The internet’s superficial and sometimes false claims about hiccup origins are a minor example, but on a grander scale the internet is making it ever easier for ideas that are not backed by sound research to spread. It is difficult to accept that we don’t know everything, that some things still defy explanation, and that not all answers can be given by Sir Google. It is amusing to invent explanations for things that may not have one, but until we find the correct answer we should be open about admitting the truth: we don’t know. Maybe someday we will know the answer* to why humans hiccup. Until then, I’m happy hiccuping without explanation.

*On a side (and positive) note, the evolutionary origin of hiccups is a subject of ongoing research by Dr. William Whitelaw at the University of Calgary. He has suggested that hiccups may be an evolutionary remnant of our descent from a common ancestor with amphibians. Frog ribbits and human hiccups do sound vaguely similar.

Janet Teeple: bringing mosquitoes and sunshine everywhere she goes

The world has been pretty depressing lately (chemical warfare, teenagers randomly shooting joggers, attempts to kill police officers using school children as bait, a jerk dropping off his cat to fend for itself next to Merkle). It is enough to make you think that there is not a whole lot of good left in the world. As a counter to that notion, I give you Janet Teeple.

Janet runs our insectary, transports mice, and generally makes most of our experiments happen. In every single experiment I have ever run with Janet, she has been 100% on point. I cannot begin to express how helpful that is. On top of being fantastically professional, she does it all with a smile.

Anyone who has worked with Janet probably already knows how incredibly lucky we are to have such a thoughtful, skilled, and talented technician working for us. I would encourage everyone to let her know how much you appreciate her this week.

Above the Arctic Circle

Andrew made the claim a couple of days ago that the amount of logistical investment in a project is inversely correlated to productivity. His comment was more tactful goading than anything else, as I recently got to spend some time with researchers in Greenland and he encouraged me to blog about it (sidenote: PIs know the best way to motivate grad students can be to make outrageous claims).

A clear bright day in August near Kangerlussuaq, Greenland

Does the amount of planning involved in carrying out field work siphon off energy and resources that could be allocated elsewhere? Does working in a remote area of the world necessitate a simplification of target goals and questions?

Nah. The phrase biologists and ecologists use most comes to mind: “it depends.” Some laboratory experiments can turn into ball-busting logistical nightmares as well. People certainly approach problem-solving differently in the field versus in the lab, but failure can happen in a remote pocket of the jungle or right at home, as can sweet success. It really just comes down to the researcher doing the probing and a bit of luck.

What I found really fascinating about the research in Greenland was that it coupled real-time evaluation of ecological changes with the testing and advancement of fundamental ecological theory. That sounds like a trivial thing to do, but under Andrew’s (false) premise it would be even more challenging in the field than in a laboratory setting, where we often know more about the system and how it functions.

One of the projects I got to learn about involves understanding how the timing of vegetation emergence affects calf production in muskoxen and caribou in West Greenland. The case they make is that it depends on specific traits of the population that place it somewhere on the scale from capital to income breeders. The more “capital” the breeder, the more it relies on stored energy for reproductive performance. Conversely, the more “income” the breeder, the more energy it derives from resources acquired during the breeding period. The muskoxen of West Greenland tend toward capital breeding, while the caribou are the opposite. Indeed, muskoxen calve 1-2 months before plants begin growing each year, while caribou calve during the peak growing season.

A small group of muskoxen

You might expect, then, that caribou calf production would be more sensitive than muskoxen’s to the timing of vegetation emergence during the breeding season. And that seems to be the case! As warming advances the start of “green-up” in the Arctic, it can cause a mismatch between plant availability and caribou calving (up to 14 days of advancement between the plant growing season and the day of peak calving).

Interestingly, while muskoxen calving success can’t be predicted from the timing of vegetation growth in the current breeding season, it does seem to be influenced by that of the previous season.

The biological story is interesting in itself, but I also think the ideas that flow from it are pretty cool: how variation in bottom-up pressures, mediated by abiotic factors, influences reproductive success differently depending on life history strategy. Taking that complexity into account adds to a better understanding of the system.

Why do I like that? Because we do that too! We consider the complex ecology of within-host interactions and the life history differences between medically relevant phenotypes to look at how parasites evolve. We just happen to work in less scenic locations… and last week’s trip only served to exacerbate my self-acknowledged, heavily romanticized vision of field work. Greenland was beautiful!

Edge of the massive ice sheet that covers 90% of Greenland