Category Archives: Statistics

How to Get a Job That You Don’t Hate

First of all, if you have not seen the movie Office Space, stop reading this blog post and watch it. I will wait.

Welcome back.

I am often asked to give career advice. This is strange since I don’t think I ever had a career. I had jobs, some terrible, some pretty good ones. I started a company in 2010. I am starting another one now. But a career, never. OK, so with that out of the way …

Earlier this month I was invited to discuss job search strategies with students in the MA in Statistics program at Columbia University. After the discussion, I posted the following blurb on my Facebook page.

Talking about careers in data analysis and stats with students in the MA in Statistics program at Columbia. My key messages: 1) if you want to work for banks, make sure you know what you are getting into; 2) think of an interview as a two way street: they interview you, you interview them; 3) if you hate your job, quit (if you can) and don’t worry about what it would look like on your resume; 4) don’t apply online, get a referral, go to meet ups, etc.; 5) learn some Bayesian stats — you will be a better human and know more than most of your peers.

I thought it would be useful to elaborate on these a bit, so here goes.

If you want to work for banks, make sure you know what you are getting into

A lot of students in the MA in Stats program want to work for banks. I am not sure why that is, but it must have something to do with geography and expectations of high earnings. Whatever the motivation, it is a good idea to know what you are getting into. Not everyone hates working for banks, but in my experience, technical people who end up there are not very happy. I think they find that the culture does not agree with them. My advice is always to ask to speak with your potential future peers and ask them about three things they love and three things they hate about their work. You would be surprised what you will learn. Having said that, I have met people on the “business” side of banks who absolutely love it. As with anything else, do your research and make your decisions based on conditional probabilities, not population averages.

Think of an interview as a two-way street: they interview you, you interview them

This should be obvious, but most people don’t do it. The thing to recognize is that there is an inherent risk asymmetry between you and your prospective employer. You are just one candidate or employee out of many. They can make a mistake with you and they will probably be OK, but you are about to commit several years of your life to them (in expectation), so you should be the one doing the interviewing! Of course, the realities of a sparse labor market are such that usually you need them more than they need you, and so the roles are flipped. This fact, although daunting, should not deter you.

You want to find out what it would be like to spend most of your waking hours at a job you do not yet have. This is not easy. To get started, make a simple two-category list: 1) culture; and 2) technical. For example, if you want flexible working hours, put that in the culture column, and if you simply must program in R, put that in the technical column. Once you are done making the list, rank-order the items. Do this before you take any interviews. After each interview, try to score the prospective employer along those dimensions. Where is the money column, you ask? That part is easy: know your minimum number and don’t be afraid to let them know what it is … but be reasonable, which means knowing what the market is paying and where you are on the skill / experience curve.

If you hate your job, quit if you can and don’t worry about what it would look like on your resume

Some jobs are just plain awful. If you do what I recommended above, you will probably avoid most of them, but every now and again one will sneak up on you. What to do? Quit! Sure, this is easier said than done, but at the very least immediately start looking for a new job and make some notes about how you were duped by this one. Introspection is a great tool and I use it often.

A friend of mine spent years working at a company for a horrible boss, and even though he eventually quit he still has emotional anxiety over the whole affair. Life is way too short to work for assholes. Get out now. But what about the resume, you ask, and I answer: if you are a technical person, GitHub (or something like it) is your resume.

Don’t apply online, get a referral, go to meet-ups, and so on, but I am sorry I can’t refer you because I don’t know you

When I was working for a bank, we had an opening for a business analyst. Now, here is the thing: a business analyst does not analyze the business. What does she do? She writes requirements for a proposed piece of software. Anyway, that’s beside the point. When this job was posted by the HR department, we received over 200 resumes! I don’t remember if we hired anyone, but you can imagine your chances of getting such a job. (Well, you can just compute them, but whatever.) The short story is: don’t apply online.

The best jobs I ever got were referrals from friends and classmates. Meetups are also some of the best places to find technical jobs. The New York Open Statistical Programming Meetup is a great one for stats people, and they often advertise jobs during their events. Another great way is to start contributing to some open source software. Where can you find great open source projects? GitHub, of course.

But Eric, why can’t you introduce me to some of those friends of yours who have all these great jobs? The truth is that they would not be my friends for much longer if I started doing that, and you should not do it either. A referral is a reflection on you — use it wisely and only refer people you know well.

Learn some Bayesian statistics — you will be a better human and know more than most of your peers

When I was getting my MA in Stats at CU, they did not have a master’s-level Bayesian class. This is a tragedy of modern statistical education, but things are getting better. My friend and co-founder Ben Goodrich is teaching an excellent Bayesian class for master’s students in the QMSS program. The stats department also offers the class, and Andrew Gelman teaches a PhD-level Bayesian course. If you are not at Columbia, Coursera recently started offering Bayesian classes. This one looks particularly interesting.

So why all the hype about Bayes? It’s a long story, but here were my initial views on the subject. I now work exclusively in the Bayesian framework. In short, Bayes keeps me close to the reasons why I fell in love with statistics in the first place — it lets me model the world directly using probability distributions.

Even if you are not a statistician, you should learn about conditional probabilities and Bayes’ rule so you do not commit common fallacies such as the prosecutor’s fallacy, especially if you are a prosecutor.
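To see why the fallacy bites, here is a minimal sketch in Python with made-up numbers (the match rate, the pool size, and the assumption of exactly one guilty person are all mine): the probability of a match given innocence is small, yet the probability of guilt given a match is nowhere near certainty.

```python
# Prosecutor's fallacy with made-up numbers: a forensic match occurs
# for 1 in 1,000 innocent people, and the suspect pool holds 10,000
# innocent people plus exactly 1 guilty one.
p_match_given_guilty = 1.0
p_match_given_innocent = 1 / 1000
p_guilty = 1 / 10001          # prior: any one of the 10,001 people
p_innocent = 1 - p_guilty

# Total probability of seeing a match at all
p_match = (p_match_given_guilty * p_guilty
           + p_match_given_innocent * p_innocent)

# Bayes' rule: P(guilty | match)
p_guilty_given_match = p_match_given_guilty * p_guilty / p_match

# P(match | innocent) is 0.001, but P(guilty | match) is only ~0.09
print(round(p_guilty_given_match, 3))
```

Confusing P(match | innocent) = 0.001 with P(innocent | match), which here is about 0.91, is precisely the prosecutor’s fallacy.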

Bonus feature: why do you want a regular job anyway?

Recently I was on a Google Hangouts call with a friend of mine who works as a contract programmer. His arrangement with the company is that he works remotely. For most people, remotely means working from home. For him, it means working from anywhere in the world. Right now he lives in a small apartment in Medellín, Colombia. He showed me the view from his window. It looks approximately like this:

[photo: the view from his window]

To quote Tina Fey: I want to go to there.

The idea that an employer dictates both the hours during which you must work and the location where the work must be performed is somewhat outdated. Sure, there are lots of jobs out there that legitimately require this kind of commitment, but it is no longer the norm. Take a look at that culture column I mentioned before, see where you stand on hours and location flexibility, and choose accordingly.

Note to people seeking an H1B visa

A lot of people I speak with are in the US on an F1 (student) visa. It is really tragic that the US does not award work visas to foreign graduates, but this is unlikely to change anytime soon. The common misconception is that you need to find a large company to sponsor your H1B (work visa). You do not. Lots of small companies can and do sponsor H1Bs. When I was working for a small startup in San Francisco in the mid-90s, we sponsored several H1Bs for Eastern European immigrants. The key is finding an experienced attorney who processes many applications and asking her for advice. Reputable attorneys will not charge you for the initial consultation.

If you have any other questions, please ask them in the comments.

In 2012, I wrote a post about how to learn applied statistics without going to grad school. I still think that one does not have to spend a large amount of money to acquire the skills necessary for data analysis. What has changed for me personally is that I am finding traditional statistical methods (call them classical, frequentist, or evolved classical in the style of the Stanford Statistical Learning school) somewhat unsatisfying.

These methods generally rely on maximum likelihood estimation (MLE) to generate point estimates and on asymptotic properties of estimators to come up with confidence intervals. One of the main issues I have with this approach has nothing to do with MLE versus the full Bayesian posterior per se. It has to do with the fact that the likelihood function is largely hidden from my view, although there are lots of other issues, some of which I hope to discuss when my understanding sufficiently progresses. I am getting too comfortable just running glm() — ok, not glm(), since there is no regularization there, but say glmnet or Random Forest or even bayesglm in R. The latter is of course Bayesian, but still a black box.

I am not sure at this point if I am ready to abandon all the mathematical and algorithmic machinery of Lasso, Random Forests, Gradient Boosting Machines, and so on, but I would like to spend more time thinking and expressing models directly rather than running and tuning abstract algorithms. I am also quite certain I don’t want to write my own model fitting, sampling, and optimization procedures.

Since I would like to approach this problem in a Bayesian way, my goal is to get to the posterior: the distribution of the parameter vector \theta given data y, p(\theta | y). In the Bayesian framework, we still work with the likelihood function p(y | \theta), but we are not trying to find some unique set of parameter values at which it is maximized (i.e., under which y is most likely). Instead, we want a complete picture of the uncertainty in our parameters that is supported by the data y, our choice of model (i.e., the likelihood, which, as Andrew Gelman likes to point out, is a form of prior knowledge), and our knowledge about the parameters (the prior distribution), without relying on asymptotic properties of estimators. In short:

p(\theta | y) \propto p(y | \theta) \, p(\theta)

Getting from the prior to the posterior is hard work unless they happen to be in the same family, which is rarely the case in the wild. The natural question then is where to start. Short of coding everything from scratch, which would be a very long project even if I knew how to do it, two types of tools are in order: a probabilistic language capable of expressing models, parameters, priors, and their relationships, and an MCMC sampler that can get us to the posterior distributions numerically. For a while, the best bet was some flavor of the BUGS language, which uses Gibbs sampling. But the state of the art has moved away from Gibbs. All the cool kids these days are playing with Stan, which uses the more efficient Hamiltonian Monte Carlo with the NUTS sampler and supports a broader set of models.
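To see what “same family” buys you, here is a minimal conjugate example in Python (the Beta prior and the coin-flip data are made up): the prior-to-posterior step is just addition, which is exactly what stops working for most real models, and why samplers like Stan earn their keep.

```python
# Beta prior + binomial likelihood is conjugate: the posterior is Beta too.
# Prior Beta(a, b); observe k heads in n flips -> posterior Beta(a+k, b+n-k).
a, b = 2, 2        # prior pseudo-counts (made up for illustration)
k, n = 7, 10       # data: 7 heads in 10 flips (also made up)

a_post, b_post = a + k, b + (n - k)
post_mean = a_post / (a_post + b_post)

print(a_post, b_post, round(post_mean, 3))  # Beta(9, 5), mean ~0.643
```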

To get a jump start on Stan programming, I recently attended a class on Bayesian Inference with Stan taught by Andrew Gelman, Bob Carpenter, and Daniel Lee (thanks to Jared Lander for organizing the class). I learned a lot and I hope to continue my exploration of Stan and Bayes.

* Thanks to Bob Carpenter for looking over the draft of this post and providing helpful comments.

First Two Weeks of Writing

Jacki and I just submitted the first two chapters to our publisher, so I would like to summarize early lessons learned. (Actually, we submitted one chapter, but the editor decided to break it in half, a decision that we fully support.) The chapters include material on programming style (from R’s point of view), an introduction to functions and functional programming, some information on S4 classes mostly from the user’s perspective, vectorizing code, debugging, and various methods of data access, including web scraping and the Twitter API.

First, the obvious. We underestimated the amount of time required to produce the content. No surprises there.

We spent too much time wrestling with the outline.  Outlining seems to work well when I know my own writing style, but not so well otherwise.  At some (earlier) point we should have just started writing and figured out the detailed chapter structure as we went along.  I suspect this will change as we get deeper into the material, but only time will tell.

What does need to be planned is the immediate section. For me, it helps to have all the code written and all the visuals produced before I start writing. When I tried writing code on the fly, I struggled to make any meaningful progress.

Lastly, it would have really helped if we read each other’s sections more carefully both in terms of synchronizing content and writing style.  I hope that the final product does not read like the book was written by two people.

Onto Chapter 2.

 

Getting Ready to Write a Book


My co-author, Jacki Buros, and I have just signed a contract with Apress to write a book tentatively entitled “Predictive Analytics with R”, which will cover programming best practices, data munging, data exploration, and single and multi-level models with case studies in social media, healthcare, politics, marketing, and the stock market.

Why does the world need another R book? We think there is a shortage of books that deal with the complete, programmer-centric analysis of real, dirty, and sometimes unstructured data. Our target audience is people who have some familiarity with statistics but not much experience with programming. Why did we not call the book Data Science blah, blah, blah…? Because Rachel and the Mathbabe already grabbed that title! (ok, kidding)

The book is projected to be about 300 pages across 8 chapters. This is my first experience with writing a book, and everything I have heard about the process tells me that this is going to be a long and arduous endeavor lasting anywhere from 6 to 8 months. While undertaking a project of this size, I am sure there will be times when I feel discouraged, overwhelmed, and emotionally and physically exhausted. What better vehicle for coping with these feelings than writing about them! (this is the last exclamation point in this post, promise.)

So this is my first post of what I hope will become my personal diary detailing the writing process.  Here is the summary of the events thus far.

  • A publisher contacted me on LinkedIn and asked if I wanted to write a book.
  • Jacki and I wrote a proposal describing our target market, competition, and sales estimates based on comparables.  We developed an outline and detailed description of each section.
  • We submitted our proposal (to the original publisher and two other publishers) and received an approval to publish the book from Apress’ editorial board. (Apress was not the original publisher.  More on that process after the book is complete.)

We set up a tracking project on Trello (thanks Joel and the Trello team), created a task for every chapter, and included a detailed checklist for each task.

We have not completed all of the data analysis required for the book, so this is going to be an exercise in model building as well as in writing.  If you have any advice about how to make the writing process better or if you think we are batshit crazy, please, post in the comments.

I hope to write a book that we can be proud of.  We have a great editorial team and a technical reviewer who is kind of a legend in the R/S world.  They will remain anonymous for now, but their identities will be revealed as soon as they give me permission to do so.

I am looking forward to learning about the writing process, about statistics, and about myself.  Let the journey begin.

Joint Distributions or B Celebrity Sex in London

I remember that when the concept of a joint probability distribution was first introduced to me, I found it completely unintuitive (like so many topics in probability), declared myself too stupid to get it, and considered giving up on statistics.

The problem was that they used these silly gambling examples to demonstrate the concept. Flip of the coin this, roll of the die that. Urrg. Once I recast it in a context that I could relate to, everything became easier. So here we go.

Suppose you are a B-level celebrity in London. Then suppose that the probability of you having sex on any given day is ⅔ (it would be virtually 1 if you were an A-level celebrity in LA, but that would not make for an interesting example). Also, suppose that the probability of a cloudy day is ⅘. Here is how we write it in the language of probabilities.

X is a random variable tracking our sex patterns. In this case, X can take on the values X = Sex and X = NoSex. Y is a random variable tracking the weather in London during the day. In our case, Y = Cloudy or Y = Sunny.

It should be obvious that P(X=Sex or X=NoSex) = P(Y=Cloudy or Y=Sunny) = 1. The probability of the whole sample space is 1 in both cases (you either have sex or you don’t, and it is either sunny or cloudy). So P(X=Sex) + P(X=NoSex) = ⅔ + ⅓ = 1. By the same token, P(Y=Cloudy) + P(Y=Sunny) = ⅘ + ⅕ = 1.

In this analysis, we assume that sex is independent of the weather, an elusive assumption, and one that should not be confused with (lack of) correlation. I will try to make the differences clear when I discuss conditional independence. OK, here is the punch line. When we say joint probability distribution, we mean the probability of a combination of the events in question. For finite, discrete random variables such as the ones we are talking about, we can summarize the joint distribution using a table.

Sum = 1             P(X=Sex) = 2/3              P(X=NoSex) = 1/3

P(Y=Sunny) = 1/5    P(X=Sex,Y=Sunny) = 2/15     P(X=NoSex,Y=Sunny) = 1/15

P(Y=Cloudy) = 4/5   P(X=Sex,Y=Cloudy) = 8/15    P(X=NoSex,Y=Cloudy) = 4/15

In the above table, when I write P(X=Sex, Y=Sunny) I mean the probability of both sex AND sunny weather. (Do not confuse this with the notation P(X | Y), which means that we are only interested in Sex given that a particular Weather outcome has already occurred.) This is why it is called a joint probability. The probabilities listed in the margins of the table are called … marginal. Because we assumed independence, each value in the cells is the product of two marginals. For instance, the probability of having sex while it is sunny is ⅔ × ⅕ = 2/15. The cool thing is that if you observe only the joint probabilities, you can easily recover the marginals by summing across rows or columns (the reverse does not work: you can’t get joints from marginals, but you can get them from a good dealer in the Bronx).

So, if I observe you, the celebrity in question, for 150 days having sex during 20 sunny days and 80 cloudy days (you were not having sex during the other 50 days, sorry), I may conclude that the marginal probability of you having sex in London is 2/15 + 8/15 = 10/15 = ⅔ = P(X=Sex) = P(X=Sex,Y=Sunny)+ P(X=Sex,Y=Cloudy), which is in fact consistent with our assumptions.  This can be formalized as follows:

P(X) = \sum_{Y} P(X,Y)

This rule is quite general (it is called the sum rule of probability), and it says that for any joint distribution of X and Y, to get back the probability of X we have to sum across all possible values of Y.

Try summing across the other rows and columns and you will see that the results are consistent there as well.
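The whole example fits in a few lines of code. Here is a sketch in Python (the variable and key names are mine): build the joint from the marginals under the independence assumption, then use the sum rule to recover the marginals.

```python
from itertools import product

# Marginals from the example above
p_x = {"Sex": 2/3, "NoSex": 1/3}
p_y = {"Sunny": 1/5, "Cloudy": 4/5}

# Under the independence assumption, each joint cell is a product of marginals
joint = {(x, y): p_x[x] * p_y[y] for x, y in product(p_x, p_y)}

# Sum rule: marginalize Y back out to recover P(X)
p_x_recovered = {x: sum(joint[(x, y)] for y in p_y) for x in p_x}

print(joint[("Sex", "Cloudy")])   # 8/15
print(p_x_recovered["Sex"])       # 2/3
```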

To plot or to ggplot, that is not the question

Producing informative and aesthetically pleasing quantitative visualizations is hard work. Any tool or library that helps me with this task is worth considering. Since I do most of my work in R, I have a choice of plot, the default plotting function; the more powerful lattice package; and ggplot, which is based on the Grammar of Graphics.

There is usually a tradeoff between the expressiveness of a grammar and the learning curve necessary to master it. I have recently invested 3 days of my life learning the ins and outs of ggplot, and I have to say that it has been most rewarding.

The fundamental difference between plot and ggplot is that with plot you manipulate graphical elements directly using predefined functions, whereas with ggplot you build the plot one layer at a time and can supply your own functions. That said, you can do quite a bit (but not everything) with a function called qplot, which hides the layering from the user and works similarly to plot. qplot is therefore exactly where you want to start when upgrading from plot.

To demonstrate, the following R code partly visualizes the famous iris dataset containing Sepal and Petal measurements of three species of Iris flower using the built in plot function.

# fix margins, axis title positions, tick length, and y-label orientation
par(mar=c(3,3,2,1), mgp=c(2,.7,0), tck=-.012, las=1)
# color points by species; +1 offsets past black in the default palette
with(iris, plot(Sepal.Length, Sepal.Width, col=as.numeric(Species)+1, pch=20))
# plot() does not draw a legend, so build one by hand
lbs = levels(iris$Species)
legend('topright', legend=lbs,
       col=2:4, cex=0.7, pch=20, box.lwd=0.5, pt.cex=0.6)

One of the problems with plot is that the default plotting options are poorly chosen, so the first line of code fixes the margins, tick marks, and the orientation of the y-axis tick labels. The parameter col=as.numeric(Species)+1 offsets the colors so that they start at red rather than the default black. Type palette() at the R prompt to see the default color vector.

The last complication is that plot does not draw the legend for you; it must be specified by hand.  And so, if you run the above code in R, you should get the following output.

It took a little bit of work, but the output looks pretty good.  Following is the equivalent task using ggplot’s qplot function.

qplot(Sepal.Length, Sepal.Width, data = iris, colour = Species, xlim=c(4,8))

As you can see, ggplot chooses a lot more sensible defaults and in this particular case, the interface for specifying the intent of the user is very simple and intuitive.

A final word of caution. Just as a skier who sticks to blue and green slopes is in danger of never making it out of intermediate hell, the qplot user will never truly master the grammar of graphics. For those who dare to use the much more expressive ggplot(…) function, the rewards are well worth the effort.

Here are some of the ggplot references that I found valuable.

 

 

 

A Better Way to Learn Applied Statistics, Got Zat? (Part 2)

In the second semester of grad school, I remember sitting in a Statistical Inference class watching a very Russian-sounding instructor fast-forward through an overhead-projected PDF document filled with numbered equations, occasionally making comments like: “Vell, ve take zis eqazion on ze top and ve substitude it on ze butom, and zen it verk out. Do you see zat?” I did not see zat. I don’t think many people saw zat.

In case I come off as an intolerant immigrant-hater, let me assure you that as an immigrant from the former Soviet bloc, I have all due respect for the very bright Russian and non-Russian scientists who came to the United States to seek intellectual and religious freedom. But this post is not about immigration, which incidentally is in need of serious reform. It is about an important subject which, on average, is not being taught very well.

This is hardly news, but many courses in Statistics are taught by very talented (and sometimes not so talented) statisticians who have no aptitude for, or interest in, teaching. But poor instructors are not the only problem. These courses are part of an institution, an institution which is no longer in the business of providing education. Universities predominantly sell accreditation to students and research to the (mostly federal) government. While I believe that government-sponsored research should be a foundation of a modern society, it does not have to be delivered within the confines of a teaching institution. And a university diploma, even from a top school (i.e. accreditation), is at best a proxy for your knowledge and capabilities. For example, if you are a software engineer, Stack Overflow and GitHub provide much more direct evidence of your abilities.

With the cost of higher education skyrocketing, it is reasonable to ask whether a traditional university education is still relevant. I am not sure about Medicine, but in Statistics the answer is a resounding ‘No.’ Unless you want to be a professor. But chances are you will not be a professor, even if you get the coveted PhD.

So, for all of you aspiring Data Geeks, I put together a breakdown of Online Classes, Books, and Community / Q&A sites that completely bypasses the traditional channels. And if you really want to go to school, most universities will let you audit classes, so that is always an option. Got Zat?

Programming
  • Online classes: Computer Science courses at Udacity. Currently Introduction to Computer Science, Logic and Discrete Mathematics (great preparation for Probability), Programming Languages, Design of Computer Programs, and Algorithms. For a highly interactive experience, try Codecademy.
  • Books: How to Think Like a Computer Scientist (Allen B. Downey); Code Complete (Steve McConnell)
  • Community / Q&A: Stack Overflow

Foundational Math
  • Online classes: Single Variable Calculus course on Coursera (they are adding others; check that site often); Khan Academy Linear Algebra series; Khan Academy Calculus series (including multivariate); Gilbert Strang’s Linear Algebra course
  • Books: Introduction to Linear Algebra (Gilbert Strang); Calculus, an Intuitive and Physical Approach (Morris Kline)
  • Community / Q&A: Math Overflow

Intro to Probability and Statistics
  • Online classes: Statistics One from Coursera (includes an introduction to the R language); Introduction to Statistics from Udacity
  • Books: Stats: Data and Models (Richard De Veaux)
  • Community / Q&A: Cross Validated, which tends to be more advanced

Probability and Statistical Theory
  • Online classes: it is very lonely here…
  • Books: Introduction to Probability Models (Sheldon Ross); Statistical Inference (Casella and Berger)
  • Community / Q&A: Cross Validated

Applied and Computational Statistics
  • Online classes: Machine Learning from Coursera; the Statistics and Data Analysis curriculum from Coursera
  • Books: Statistical Sleuth (Ramsey and Schafer); Data Analysis Using Regression and Multilevel Models (Gelman); Pattern Recognition and Machine Learning (Chris Bishop); Elements of Statistical Learning (Hastie, Tibshirani, Friedman)
  • Community / Q&A: Stack Overflow, especially under the R tag; New York Open Statistical Programming Meetup (try searching meetups in your city)

Bayesian Statistics
  • Online classes: none that I know of, but check the above-mentioned sites
  • Books: Bayesian Data Analysis (Gelman); Doing Bayesian Data Analysis (Kruschke)
  • Community / Q&A: I don’t know of any specialized sites for this. (Opportunity?)

Don’t Judge Your Ecuadorian Chef or How to Learn Statistics (Part 1)

There is a small Ecuadorian cafe right outside my apartment building. The beauty of living in Astoria, Queens, is that delicious, reasonably priced food is just a few blocks away. The owner, Claudio, is a good buddy of mine and cooks up amazing dishes like Pernil, Ceviche Camaron, and other Ecuadorian goodness. It is usually quite warm inside, with South American music playing in the background and the clunking of dishes and cutlery filling in the rest of the spectrum.

Beef Stew, Ecuadorian Style - $8 with Soup

Claudio’s brother Edgar helps run the place by chopping vegetables and occasionally stepping up to the stove.  The place is small, usually a bit crowded and you get this feeling that these guys earn every dime the hard way.  Always hustling, delivering lunches and dinners, but never forgetting to smile.  Recently I had the following conversation with Edgar.

“Enrique, cómo está, my friend?”, says Edgar as I enter the store.
(Everyone there calls me Enrique.  I think I might have said my name was Enrique, when we first met.  They also sometimes call me Che Guevara. So it goes.)

“Bien, gracias, ¿y tú?”
(This is the extent of my Spanish)

“Very good; jost getting the lonches ready”
(He knows, that was the extent of my Spanish.)

Then after a while.

“Enrique, I want it to ask you so’mthin. You mens’ioned that you studied stateestics. I want to ask you, what should I do to get into this field.”

I feel embarrassed. I never thought that he could be even remotely interested in the subject. So, like a total ass, I proceed with a question of my own.

“Oh yeah?  Have you taken any math classes?”, I ask sardonically.

“Well,  jes.  I have taken Calculus I through III, Probability Theeory, Leenear Algebra, Differensial Equasions, Real Analysis I, Real Analysis II, Heestory of Mathemathics, and lets see.  I have also taken Leenear Regression.  They let me take graduate classes at Hunter, since I have done so well in my undergraduate ones”, nonchalantly answers Edgar, while wiping off the butcher’s knife.

OK, now I feel like a special kind of ass – a dumb-ass.

“Dude, you are much more qualified than I ever was.  What can I do for you?”, I said trying to recover.
(I call a lot of people ‘dude’.  Even my daughter.  She once said: “I am not a dude daddy, I am a girl!”. “Whatever, dude”, came the reply.)

“Well, I was looking for graduate schools, but they are so espensive. I don’t think I can afford it.”

“You don’t need to go to graduate school, Edgar.  I mean you can, but I think that you can get the skills necessary without it and perhaps learn even more.”

When I came home, I thought about how one would go about learning the subject without relying on the traditional venues.  What if the person does not have Edgar’s math background?

The Teaser
The first question one usually asks when confronted with a new subject is: why? The answer is that statistics will make you a better human being, no more, no less. As a side effect, it will help you make better decisions when confronted with uncertainty, which is pretty much always.

Here is a teaser.  Suppose you are given two sequences of Boy/Girl births: BGGBGG and BBBBBG.  Which one is more likely to occur?  If you think that the first one is more likely, you are in the majority.  You are also wrong, but not to worry.  A casual investigation of probabilities will clear this right up.

Since I am in a baby mood, here is another. Suppose a hospital reports a boys-to-girls ratio of 0.70 on a given day. Do you believe this? Is it more likely to occur at a small rural hospital or a large urban one?

Study after study shows that we do not have an intrinsic intuition for probabilities. Nature, it seems, did not prepare us to live in a highly uncertain world. All of us need some help with this, some more than others. For example, if the right answer to this problem is immediately obvious to you, you are either a highly evolved human being or an alien robot. In either case, I envy you!
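If you would rather simulate than reason, here is a sketch in Python (the hospital sizes, the number of simulated days, and applying the 0.70 cutoff to the boys-to-girls ratio are my assumptions): each specific birth sequence is equally likely, and lopsided ratios are far more common at the small hospital.

```python
import random

random.seed(7)

# Teaser 1: any particular sequence of 6 births, BGGBGG or BBBBBG,
# has the same probability: (1/2) ** 6.
p_sequence = (1 / 2) ** 6  # 0.015625 for either sequence

# Teaser 2: days with a boys-to-girls ratio of 0.70 or less are far
# more common at a small hospital than at a large one.
def frac_lopsided_days(births_per_day, days=20_000):
    lopsided = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        girls = births_per_day - boys
        if girls > 0 and boys / girls <= 0.7:
            lopsided += 1
    return lopsided / days

small, large = frac_lopsided_days(10), frac_lopsided_days(100)
print(small > large)  # the small hospital sees far more lopsided days
```

Small samples produce extreme proportions much more often, which is the whole point of the second teaser.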

The Tools
While discussing the subject, I purposely avoid terms like Machine Learning, Data Mining, and Data Science.  A lot has been written about the subtle differences.  If you care about that, here is one discussion thread.  Also, see a related post from Drew Conway here.

There are three main threads that I see as necessary to be an applied statistician. (Theoretical statistics is largely an academic discipline; if you are interested in it and live in New York, take a look here.)

  • PROGRAMMING: Statistics is becoming more and more computational. In the periphery, it is important to be able to prepare your data for analysis. At the core, the rapidly evolving but old discipline of Bayesian Statistics seldom offers analytical solutions, and the ability to program simulations becomes very important. Programming your own optimization algorithms is also a pedagogical and rewarding exercise. Do you need to be a computer scientist? No, but it helps to think like one. If you have never programmed, read the first chapter and ignore the references to Python. More on programming languages later.
  • FOUNDATIONAL MATH: Despite Edgar’s love for Mathematics, a basic grasp of multivariate calculus, probability, and linear algebra is enough to acquire the skills and intuition necessary for applied statistics. If you want to dig a little deeper, Measure Theory, a branch of Real Analysis, helped solidify the foundations of probability. It is good to know it exists, but I would trust that it works and put off the investigation until much later. If all this sounds daunting, don’t be alarmed. Anyone can learn enough math to understand what is going on. (I did, and I am no mathematician.)
  • STATISTICS: This will largely depend on the type of work you enjoy doing, but the core will include Linear Regression (like predicting income based on age, sex, education, and so on) and Generalized Linear Models (like classifying a tumor as benign or malignant based on its size). I would also recommend Time Series (used in climate models, economics, and finance, among others). An understanding of Markov Chains and Markov Chain Monte Carlo (MCMC) will be helpful for Bayesian Statistics. This list goes on and on, but even if you master just the first sentence in this paragraph, you are way ahead of the curve.
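To make the first item on that list concrete, here is a least-squares line fit done by hand in Python (the income-versus-age numbers are invented): the slope is cov(x, y) / var(x), and the intercept follows from the means.

```python
# Simple linear regression by hand: find the slope and intercept that
# minimize squared error for a toy income-vs-age data set (numbers made up).
ages = [25, 30, 35, 40, 45, 50]
incomes = [30, 38, 41, 49, 56, 60]  # in thousands

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(incomes) / n

# beta = cov(x, y) / var(x); alpha = mean_y - beta * mean_x
beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, incomes))
        / sum((x - mean_x) ** 2 for x in ages))
alpha = mean_y - beta * mean_x

print(round(beta, 2), round(alpha, 2))  # 1.21 0.24
```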

If the above sounds like a bunch of gibberish, do not be deterred, because so far I have not told you much. I listed and named the areas of focus, but I have not described the method. Where should you acquire these skills, using what mode of interaction, and in what sequence?

To give you a preview, I do not like the highly theoretical and sequential method used in most graduate Statistics programs. In the next post, I will describe an alternative.

In the meantime, I am off to Claudio’s cafe.   I hear he has lentil soup and goat stew cooking.  (While I eat, Claudio is trying to convince me to follow Jesus.  After sampling the food, I am starting to believe…)