I, Robot – or at least thinking like one: ISTE Standard #5 “Computational Thinking”

“Students formulate problem definitions suited for technology-assisted methods such as data analysis, abstract models and algorithmic thinking in exploring and finding solutions.” ISTE Standard 5, indicator 5a (2016)

“This kind of thinking [computational thinking] will be part of the skill set of not only other scientists, but of everyone else…computational thinking is tomorrow’s reality.” Jeannette M. Wing, “Computational Thinking” (2006)

“1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” The Three Laws of Robotics. Isaac Asimov, I, Robot (1950)

I would like to bookend my thoughts on ISTE Standard 5 this week with one of my all-time favorite science fiction authors, Isaac Asimov.  In 1950, when robots were still the stuff of science fiction and computers took up an entire room with an internal memory of 1,000 words, Asimov imagined walking, talking, almost-sentient robots who served mankind.  The genius of his storytelling was not in creating talking, thinking robots (there have been countless forgettable sci-fi stories about those), but in the algorithm he integrated into all of his fictitious automatons: the Three Laws of Robotics.  These were the fail-safe mechanism in the universe he created, and they were also the basis for all of the stories in I, Robot.  In a way, I, Robot is a book about computational thinking.
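Since the Three Laws really do read like an algorithm, here is a minimal sketch of that idea, entirely my own invention (the dictionary fields and function names are hypothetical, not from Asimov): the Laws as a prioritized rule check, where a proposed action is tested against each Law in order.

```python
# A toy sketch of the Three Laws as a prioritized rule check.
# The fields of `action` are invented for illustration; `evaluate`
# returns the highest-priority Law a proposed action would violate,
# or None if the action is permitted.

LAWS = [
    ("First",  lambda a: not a["harms_human"]),    # may not injure a human being
    ("Second", lambda a: a["obeys_order"]),        # must obey orders given it
    ("Third",  lambda a: not a["destroys_self"]),  # must protect its own existence
]

def evaluate(action):
    """Return the name of the first Law `action` violates, or None."""
    for name, allowed in LAWS:
        if not allowed(action):
            return name
    return None

# Self-sacrifice while obeying an order trips only the Third Law,
# the lowest priority of the three.
print(evaluate({"harms_human": False, "obeys_order": True, "destroys_self": True}))  # prints Third
```

The ordering of the list is doing the real work here: a lower Law is never even consulted until every higher Law has been satisfied, which is exactly the hierarchy Asimov builds his plots around.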

Computational thinking (CT) is useful for solving real-world problems, and it trains the mind to think in a very specific way: a practical, logical, and beneficial way.  In fact, I think the world would be better off if more of us used CT on a regular basis to solve problems in our lives.  And the ISTE is with me on this. Their fifth standard for students is “Computational Thinking,” and it advocates for “Students [to] develop and employ strategies for understanding and solving problems in ways that leverage the power of technological methods to develop and test solutions” (ISTE).  Brilliant.  Upon further reading, I came across indicator 5a, which states that “Students formulate problem definitions suited for technology-assisted methods such as data analysis, abstract models and algorithmic thinking in exploring and finding solutions.”  This intrigued me, especially the first part.  What is this saying about the questions we should be encouraging our students to ask?  The expanded explanation on the ISTE site states that to “formulate problem definitions,” students should “Create and articulate a precise and thorough description of a problem designed to facilitate its solution, including conditions and constraints that must be taken into account.”

The heart of my question for this standard revolves around this process.  Are we to encourage our students to frame their questions around a particular way of thinking?  One that is “precise” and leads to a “solution”?  In some ways, these goals sound contradictory to the “authentic” problems laden with “ambiguity” and “open-endedness” valued in ISTE Standard 4.  Framing the question is important.  How the question is framed, how it's conceived, can often determine the outcome.  If we are going to buy in to a specific way of thinking and a specific way of framing the questions we ask, we should be aware of exactly what we are doing.
My question seeks to reconcile the two approaches mentioned in the standards and to learn more about how to apply CT specifically to my field of history.

A partial answer came from one of the readings for the week, Barr, Harrison, and Conery’s piece, “Computational Thinking: A Digital Age Skill for Everyone” (2011). In this article, which advocates for teaching CT in K-12, they identify a number of “dispositions or attitudes that are essential dimensions of CT.”  Among these are “tolerance for ambiguity” and “the ability to deal with open-ended problems.”  How these manifest themselves in the process of computational thinking is not clear.  In fact, the authors really say nothing about it, other than that it’s an attitude.  In searching the article further, I came across the name of a woman who is at the center of the CT movement, Dr. Jeannette M. Wing.  The Barr article describes one of her works on CT as “seminal,” so I decided to go straight to the source to find out more.


Dr. Wing’s article argues for the importance and inevitability of CT. She also explains what it is and what it isn’t. She writes, “it is conceptualizing, not computer programming…It is fundamental, not rote skill…It is a way humans, not computers, think…It complements and combines mathematical and engineering thinking…It is ideas, not artifacts,” and, “it is for everyone, everywhere” (Wing, 2006).  The third statement struck me as relevant to my point, so I examined it further.  Dr. Wing states that CT is a way humans solve problems, but it is not about getting humans to think like computers.  She writes, “Computers are dull and boring; humans are clever and imaginative. Equipped with computing devices, we use our cleverness to tackle problems we would not dare take on before the age of computing.”  I understand her point that we need to be creative and use computers as tools to further our creative endeavors, but it’s difficult to discern how that fits with CT.  Her first point, that CT is conceptualizing rather than programming, is helpful in this regard, since she notes that “Thinking like a computer scientist means more than being able to program a computer. It requires thinking at multiple levels of abstraction.”  I can only surmise that that’s where the answer lies: in our ability to think abstractly.  Abstraction would allow for ambiguity, and it would allow for open-ended questions and answers.  I’m still not sure how it fits with the formulaic part of CT, or what it means for devising questions along the lines of CT.  In fact, on this latter issue, Dr. Wing wants us invested in CT so heavily that we don’t even realize we are doing it. With regard to her last point, she writes, “Computational thinking will be a reality when it is so integral to human endeavors it disappears as an explicit philosophy.”  So essentially we should do it and not even think about it.  That seems like an awfully big ask.

With regard to CT and history, the answer is even less clear.  The Barr article gives an example that uses it to some degree, but it’s much more of a compare-and-contrast exercise (which uses some degree of CT) than it is a specific example of the use of CT.  Dr. Wing has another article, from 2010, in which she mentions “computational social science” but does not elaborate on what that course looks like (Wing, 2010).  She also references an as-yet unpublished manuscript she’s writing, called “Demystifying Computational Thinking for Non-Computer Scientists,” and issues a call to other computer scientists: “Our immediate task ahead is to better explain to non-computer scientists what we mean by computational thinking and the benefits of being able to think computationally.”  Yes, this is what I need.  I think there is work to be done here. While I think I understand the process, in some ways it’s still fairly mystical to me – in its approach to asking questions, in its totality, and in its application to the social sciences.

Humans are difficult, by nature, to think about computationally.  When studying history, we study behaviors and actions and stories over hundreds or thousands of years.  Humans do not function like computers.  We do not always act logically. Love, hate, fear, confidence, selfishness, and altruism (just to name a few) are all contradictory parts of our nature and have played various roles in the history of humankind.  How can CT account for that?  One of my favorite topics to teach my students is the “prisoner’s dilemma.”  It is essentially a scenario in which a person acting selfishly brings about the least-beneficial outcome both for themselves AND for the person they are competing against (see video below).  Why would a person choose a sub-optimal outcome for both themselves AND another person when a more optimal outcome is available to them?  Oddly, it IS logical.  And yet, if we are only concerned with the best outcome, solving the problem with the intent to get the optimal result, we would be choosing the “wrong” path.  History is full of sub-optimal outcomes that defy rationality.  People act for their own reasons at a given time, but they don’t know the results in advance.  As historians, we look back with 20/20 vision. We know what’s coming (for the group being studied), and we try to understand why things happened as they did.  It’s the ultimate exercise in open-ended questions. I don’t know, as of yet, how CT helps with that.  I don’t know how it helps my students with that.  I guess I’ll wait for the demystification article.
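The paradox in the prisoner's dilemma can be made concrete with a small sketch. The payoff numbers below (years in prison, so lower is better) are a common textbook choice, not taken from the post or the class video:

```python
# Sentences for (A, B) given each prisoner's choice; lower is better.
payoff = {
    ("silent", "silent"):   (1, 1),   # mutual cooperation: light sentences
    ("silent", "confess"):  (3, 0),   # A betrayed: A serves 3, B goes free
    ("confess", "silent"):  (0, 3),   # B betrayed: A goes free, B serves 3
    ("confess", "confess"): (2, 2),   # mutual defection: both serve 2
}

def best_reply(other_choice):
    """Prisoner A's self-interested best reply to B's fixed choice."""
    return min(["silent", "confess"], key=lambda a: payoff[(a, other_choice)][0])

# Whatever B does, confessing is individually best for A...
print(best_reply("silent"), best_reply("confess"))   # prints: confess confess
# ...yet mutual confession is worse for BOTH than mutual silence.
print(payoff[("confess", "confess")], payoff[("silent", "silent")])  # prints: (2, 2) (1, 1)
```

That is the "oddly, it IS logical" part in miniature: each step of the reasoning is locally rational, and the combined result is still sub-optimal for everyone.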


I started this post by saying I’d bookend it with Isaac Asimov, and so I shall.  I said that I, Robot is, in a way, about computational thinking, and it is.  But in some ways, it’s a book about the problems with such an approach. Each chapter involves some sort of conflict with or alteration of the Three Laws of Robotics. The robots in these stories are bound (“hard-wired,” if you will) to that basic algorithm, and Asimov has a great deal of fun pointing out the potential problems when something must operate strictly by a set pattern of behavior.  It is the humans who must come up with the creative solutions to the problems created by their own inventions.  I still remember reading as a child the second chapter of the book, about Speedy the robot.  Speedy was casually ordered to get some selenium from a pool on Mercury but failed to return. When the humans found Speedy, he was running in circles and acting as if drunk.  The problem was that Speedy was an expensive, experimental robot, so the Third Law about self-preservation had been strengthened. He couldn’t complete his task because of a dangerous gas near the pool that would probably destroy him (Third Law, strengthened), but he also could not ignore the order he was given (Second Law, weakly ordered).  The ensuing conflict left him stuck in a loop, which was only broken when one of the scientists on Mercury deliberately put himself in harm’s way; Speedy’s application of the First Law kicked in, and he saved the human.  I think CT, too, may have its limits.  Of course, admittedly, I don’t entirely grasp all aspects of it, but I will keep asking questions – even when it’s supposed to be so internalized we don’t even think about it.  I’m sure there’s a creative solution.
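Speedy's deadlock can even be sketched as a toy cost-minimization, though this model is entirely my own construction, not anything in Asimov: each Law contributes a weighted cost, and the robot settles at whatever distance from the selenium pool minimizes the total.

```python
# A toy model of Speedy's loop. Distance d runs from 0 (at the pool)
# to 1 (safely away). The weights are invented for illustration.

def total_cost(d, w_order, w_self):
    order_cost = w_order * d            # Second Law: farther away = order less obeyed
    danger_cost = w_self * (1 - d) ** 2 # Third Law: closer = more gas, more danger
    return order_cost + danger_cost

def equilibrium(w_order, w_self, steps=1000):
    """Distance that minimizes total cost, searched on a simple grid."""
    ds = [i / steps for i in range(steps + 1)]
    return min(ds, key=lambda d: total_cost(d, w_order, w_self))

# Weakly given order vs. strengthened self-preservation: Speedy gets
# stuck partway out, at the distance where the two costs balance.
print(equilibrium(w_order=0.5, w_self=2.0))    # prints: 0.875
# A First-Law-strength stimulus (modeled here as a huge weight on the
# "go" term) overrides everything and sends him all the way in.
print(equilibrium(w_order=100.0, w_self=2.0))  # prints: 0.0
```

The point of the sketch is the same as the story's: a system that only minimizes a fixed objective can sit contentedly at an equilibrium that, to a human observer, looks like a drunk robot running in circles.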

Incidentally, the year after Asimov published I, Robot, he published another book called Foundation.  It was the story of a mathematician who used advanced mathematics (CT?) and history to formulate a completely accurate predictive model of the future.  Combining math and history gave him the key to knowing the future.  It was called “psychohistory.”  Maybe that’s next.


Asimov, Isaac (1950).  I, Robot.  London: Folio Society

Barr, D., Harrison, J., & Conery, L. (2011). “Computational thinking: A digital age skill for everyone.” Learning & Leading with Technology, 38(6), 20-23.

ISTE (2016). “ISTE Standards for Students 2016.” International Society for Technology in Education. Retrieved from http://www.iste.org/standards/standards/for-students-2016

Wing, Jeannette M. (2006). “Computational thinking.” Communications of the ACM, 49(3), 33-35.  Retrieved from https://www.cs.cmu.edu/~15110-s13/Wing06-ct.pdf

Wing, Jeannette M. (2010). “Computational thinking: What and why?” The Link, Carnegie Mellon School of Computer Science.  Retrieved from http://www.cs.cmu.edu/~CompThink/resources/TheLinkWing.pdf