Technology & Decisions (HL)

As you know well by now, humans are far from perfect decision makers.  When faced with tough decisions, we often fall back on the fast, easy answers provided by System 1.  And System 1 thinking is fraught with biases, which can lead to all sorts of nasty consequences - like marrying the wrong person, hiring the wrong employee, or buying the wrong product.

With the rapid advance in technology, we are now living in the age of "big data".  Computers can store and process vast amounts of information - to name just one example, Facebook can generate a remarkably accurate prediction of your personality, interests, political views, and consumer habits, just on the basis of the friends in your network, the photos you upload, and the posts you have "liked".  All of which leads to a fascinating question - can computers make better decisions than people can? 
Think Critically

What happens when you send off your university applications?  How do universities decide which students to admit, and which to deny?  Watch the video below for a look at the admissions process.

As you saw in the video, a group of admissions officers will sit around a table, and discuss each applicant, one by one.  They will look at your grades, pore over your letters of recommendation, and read aloud your personal statement.  

As you may well be aware, this process is far from perfect.  The admissions officers might read your application just before lunchtime, when perhaps they are hungry and exhausted.  Or perhaps they read your application immediately after that of a true superstar student, making yours seem rather ordinary in comparison.

Perhaps there is a better way.  A computer could analyse incoming GPA, activities, and letters of reference from previous students, and see which variables correlate with success in university.  Then, the computer could generate an algorithm that would be used to either accept or reject all new applicants.  Your university application would simply be scanned by a computer, which would identify key variables and make an instantaneous admissions decision.  No more biased humans - every decision would be made with the efficiency of a mouse click.
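The kind of algorithm described above can be pictured as a simple weighted-scoring model.  Below is a minimal sketch in Python - the data, variables, and weighting method are entirely invented for illustration, and no real university's system works quite this simply:

```python
# A hypothetical sketch of the admissions algorithm described above.
# All data, variable names, and weights are invented for illustration -
# a real system would use thousands of records and a proper statistical model.

past = [  # (incoming GPA, activity count, reference-letter score 1-5)
    (3.9, 4, 5), (3.2, 1, 3), (3.7, 3, 4),
    (2.8, 2, 2), (3.5, 5, 4), (3.0, 1, 2),
]
succeeded = [1, 0, 1, 0, 1, 0]  # 1 = went on to do well at university

def mean(rows):
    """Column-wise mean of a list of equal-length tuples."""
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

good = mean([p for p, s in zip(past, succeeded) if s == 1])
poor = mean([p for p, s in zip(past, succeeded) if s == 0])

# Weight each variable by how strongly it separates the two groups,
# and place the acceptance cutoff midway between them.
weights = [g - p for g, p in zip(good, poor)]
threshold = sum(w * (g + p) / 2 for w, g, p in zip(weights, good, poor))

def decide(applicant):
    score = sum(w * x for w, x in zip(weights, applicant))
    return "accept" if score >= threshold else "reject"

print(decide((3.8, 3, 5)))  # accept
print(decide((2.9, 1, 2)))  # reject
```

The principle - weight the variables that correlate with success, then apply a cutoff - is the same whether the model is this crude or a sophisticated one trained on decades of student records.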

Reflect on the following:

  • Do you think that universities should use computer algorithms to accept or reject applicants?  Why or why not?

  • Would you want your university application to be read by a group of people, or a computer?  Explain why
Interviews & Overconfidence

One of the most important decisions for any business is who to hire.  Hiring the right person can invigorate an organization, inject fresh ideas, maybe even turn around a struggling company.  Hiring the wrong person, on the other hand, can be a disaster - a bad hire could make costly mistakes, cause conflict in the workplace, or even sabotage the company.

For most hiring managers, just looking at a resume isn't enough.  There are always people who look good on paper, but might be awful employees.  So hiring managers typically conduct some sort of face-to-face interview, in which they attempt to get to know the job applicant, get a feel for their personality, and decide if this person is likely to make a valuable contribution to the company.

Face-to-face interviews seem like a good idea in theory - except that study after study has found that informal interviews are all but useless in determining who will be a good employee.  Some people are great at being interviewed, yet turn out to be horrible at the job, and vice versa.  And yet, most managers continue to rely on interviews, believing that they can use their intuition to choose the right person for the job.  This is a typical example of the overconfidence effect - having too much confidence in the accuracy of your judgement, even when the facts suggest otherwise.  For one example of how overconfidence in first impressions can result in poor judgement, see the study below.
Research - Dana et al

Aim: Investigate the role that interviews play in decision making

Procedure:
  • Participants were university students.  They were given information on other students' course selection and past GPA.  (Participants were told that past GPA is the best predictor of future GPA).  Then, they were asked to predict the students' future GPA

  • In some cases, participants met and interviewed the students for whom they would make GPA predictions, while in other cases they did not meet the students, making the prediction based on course selection and past GPA alone

  • Among the interviews that took place, in half of the interviews, the students answered the questions honestly.  In the other half of interviews, participants were only allowed to ask yes/no questions, and the students responded dishonestly (randomly choosing Yes or No depending on the letters in the question)

Findings:
  • Participants made more accurate GPA predictions for the students whom they did not interview.  Rather than being helpful, the interviews were actually counterproductive, making the predictions worse

  • None of the participants realized when the student they were interviewing was giving random answers.  In fact, when this happened, participants rated the extent to which they "got to know" the person as slightly higher than when honest answers were given!

Conclusion:
  • These results support the overconfidence effect.  Participants were confident that they could form an accurate impression of a person from an interview, and thus be able to predict their future GPA.  In fact, participants made more accurate predictions when they did not interview the student

  • Even when the interviewee answered the questions randomly, participants somehow formed a coherent impression of the person from the random answers.  People can't help but see a signal in the noise

Evaluation:
  • This study has a strong experimental design, establishing a causal relationship between the independent variable (whether or not interviews took place) and the dependent variable (the accuracy of future GPA predictions)

  • All of the participants were university students, who did not have much expertise or experience in interviewing.  Perhaps professionals with years of experience in conducting interviews and hiring employees would be able to make more accurate predictions on the basis of the interview

  • This study suggests the limits of human judgement, especially when it comes to intuition and first impressions.  Perhaps predictions based on data alone are more accurate

Should the computer decide?

Human intuition is real, but often faulty.  Because of our System 1 biases, we often make poor decisions.  Unfortunately, we are usually overconfident that our decisions are accurate, even when they turn out to be wildly off the mark.  This suggests that computer algorithms might be capable of making more accurate decisions than people, especially when there is lots of data available.  According to Andrew McAfee, writing in the Harvard Business Review, human intuition is often deeply flawed, and algorithms have proven to be more accurate than experts in predicting who will commit suicide, how long someone will stay in a job, what GPA college students will get, and even in diagnosing breast cancer from a scan.  Paul Meehl, who spent his career researching the performance of algorithms versus human experts, concluded that "when you are pushing over 100 investigations, predicting everything from the outcome of football games to the diagnosis of liver disease, and when you can hardly come up with a half dozen studies showing even a weak tendency in favor of the [human] clinician, it is time to draw a practical conclusion."  Below is a description of one such study, comparing algorithm-based decision making with human intuition.

Research: Hoffman et al

Aim: Compare the hiring decisions of human managers with computer algorithms

Procedure:
  • Research was carried out across 15 businesses that employ low-skilled service workers (such as fast food restaurants).  These jobs typically have high worker turnover, which is expensive for businesses (they must take time to find, hire, and train a replacement)

  • A computer algorithm was used to predict the job performance of 300,000 job applicants, based on just a few questions about their skills and personality.  The algorithm sorted applicants into three categories: green (high potential for success), yellow (medium potential), and red (low potential).

  • However, hiring managers were still allowed to "overrule" the algorithm and hire someone from one of the lower categories (yellow or red), if they felt that these applicants were actually likely to be good employees
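The sorting step can be pictured roughly as follows.  This is a hypothetical sketch - the questions, weights, and cutoffs are invented here, and the actual model used in the study was more sophisticated:

```python
# A hypothetical sketch of sorting applicants into green/yellow/red bands
# from a short questionnaire.  The questions, weights, and cutoffs below
# are invented for illustration only.

WEIGHTS = {
    "follows_instructions": 2.0,
    "comfortable_with_customers": 1.5,
    "reliable_schedule": 2.5,
}
MAX_SCORE = sum(w * 5 for w in WEIGHTS.values())  # each answer is rated 1-5

def band(answers):
    """answers: dict mapping each question to a rating from 1 (low) to 5 (high)."""
    score = sum(WEIGHTS[q] * answers[q] for q in WEIGHTS)
    if score >= 0.8 * MAX_SCORE:
        return "green"   # high potential for success
    if score >= 0.5 * MAX_SCORE:
        return "yellow"  # medium potential
    return "red"         # low potential

applicant = {"follows_instructions": 5,
             "comfortable_with_customers": 4,
             "reliable_schedule": 5}
print(band(applicant))  # green
```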

Findings:
  • The algorithm's predictions proved to be statistically significant predictors of how long employees stayed.  The employees rated "green" stayed on the job an average of 12 days longer than the "yellow" employees, who in turn stayed on the job an average of 17 days longer than the "red" employees.  (The median length of job stay was 99 days)

  • When a hiring manager "overruled" the algorithm and chose a "yellow" candidate over a "green" one, the "yellow" candidate still ended up staying on the job for 8% less time than "green" candidates

Conclusion:
  • Computer algorithms can make accurate predictions about employee retention based on a few simple questions

  • Human intuition is often counter-productive.  When managers used their intuition to override the algorithm, their judgements turned out worse than if they just followed the algorithm's decisions.

Evaluation:
  • This study involved thousands of applicants in actual businesses, making ecological validity high

  • This study only involved low-skilled service workers, so it remains to be seen if an algorithm could be used to predict employee performance for other types of jobs

  • This study only measured how long employees stayed in their jobs, not other factors related to job performance (like customer satisfaction), which might be more difficult to quantify and predict accurately
Evaluating computer-based decision making

  • Make a list of advantages that computer algorithms have over human decision makers

  • Make a list of the limitations of computer-based decision making


  • I can discuss some of the limitations of human decision making, including the overconfidence effect

  • I can describe the Aim, Procedure, Findings, and Conclusion of Dana et al, explaining how this study demonstrates the limits of human judgement.  I can also evaluate the study

  • I can explain why (in many situations) computers may do a better job of making decisions than people

  • I can describe the Aim, Procedure, Findings, and Conclusion of Hoffman et al, explaining how this study demonstrates the strength of algorithm-based decision making.  I can also evaluate the study
  • I can discuss the strengths and limitations of using algorithms to make decisions
Quiz Yourself!

1.  Which statement best summarizes the results of research by Dana et al?

(a) Experts are better at making predictions than university students

(b) When making GPA predictions, it is best to take into consideration all of the available information, including past GPA and performance on the interview

(c) Interviews are effective when questions are structured and the same questions are asked of each student

(d) Informal interviews are not effective in estimating a student's GPA, and a computer could make a better prediction by using quantitative data alone

2.  What was the independent variable in the research by Dana et al?

(a) Whether or not an interview took place

(b) The student's past GPA

(c) The student's predicted GPA

(d) The student's actual GPA

3.  In the research by Hoffman et al, what judgement can be made about human intuition?

(a) Although algorithms can be effective, the research by Hoffman demonstrated that sometimes intuition can be more accurate than a computer prediction

(b) Human intuition can make better holistic judgements than a computer algorithm

(c) A combination of machine decision making and human intuition produces the best judgements

(d) Decisions based on intuition are consistently worse than decisions based on data alone

4.  What is a limitation of the research carried out by Hoffman et al?

(a) The study had low ecological validity, as it was based on a computer simulation

(b) The study involved only low-skill employees, so the findings may not generalize to other types of hiring decisions

(c) The study involved a relatively small sample size, making it difficult to generalize the findings

(d) The study took place in a laboratory, so ecological validity is low


Answers: 1 - D, 2 - A, 3 - D, 4 - B