Key Points:

  • Educational technology is evolving to include artificial intelligence.
  • Artificial intelligence will bring “human-like” features and agency into future technologies.
  • Policy will have an important role in guiding the uses of artificial intelligence in education to realize benefits while limiting risks.

Educators, students, parents, and caregivers use technology daily, and it has become essential to teaching and learning. Yet familiarity with educational technology obscures a transformation occurring behind the scenes: almost all forms of technology used in education are beginning to incorporate artificial intelligence (AI) systems. More than half of school leaders already see the role of AI increasing in their school districts (Figure 1). Within five years, AI will change the capabilities of teaching and learning tools. This parallels what is happening in our everyday lives; many people regularly use AI-enabled features like voice assistants, automatic mapping, and early warnings of potential credit card fraud. Now is the time to begin to understand the implications, support effective use, and prepare policies that address future technologies for teaching and learning.

In this first of a series of six blog posts, we define AI in three ways, shifting from a view of AI as human-like toward a view of AI that keeps humans in the decision cycle.

Definition 1: Human-like capabilities to converse, reason, and act

“the theory and development of computer systems able to perform tasks
normally requiring human intelligence” [1]

Broad cultural awareness of AI may be traced to the landmark 1968 film “2001: A Space Odyssey,” in which the HAL 9000 computer converses with astronaut Frank. HAL helps Frank pilot the journey through space, a job that Frank could not do on his own. Eventually, however, Frank goes outside the spacecraft, HAL takes over control, and this does not end well for Frank. HAL exhibits human-like features such as reasoning, talking, and acting. Like all applications of AI, HAL can help humans but also introduces unanticipated risks.

The idea of “human-like” is helpful as shorthand for the fact that computers now have capabilities very different from those of early educational technology applications. As is the case with HAL, educational applications will be able to converse with students and teachers, will be able to co-pilot how activities unfold in classrooms, and will be able to take actions that impact students and teachers broadly. There will be opportunities to do things much better than we do today, and also risks that must be anticipated and addressed.

The “human-like” shorthand can also mislead, however, because AI processes information differently from how people process information. When we gloss over the differences between people and computers, we may frame policies for AI in education that miss the mark.

Definition 2: An agent pursuing a goal

“computing that acts independently towards a goal
based on inferences from theory or patterns in data” [2]

This second definition emphasizes agency: that AI systems will make choices that impact human teaching and learning. Correspondingly, humans must determine the types and degree of agency we will grant to technology within educational processes. This is not new: for decades, we’ve discussed the lines between the roles of teachers and computers. However, the discussion will intensify as technology becomes more powerful and ubiquitous.

Let’s start with a simple example. When a teacher says, “Display a map of ancient Greece on the classroom screen,” an AI agent may choose among hundreds of available maps by taking note of the lesson objectives, what has worked well in similar classrooms, or what maps have the features needed by students in the classroom. When an AI agent chooses an appropriate instructional resource, or provides a choice among a few options, teachers may save time. An independent agent may help teachers or students by handling some of the lesser goals that must be achieved in a lesson plan, allowing teachers and students to focus on more important goals. Although we may allow a computer agent to choose a map for us, there will be other forms of agency that we may reject, such as choosing individual readings for students before a discussion of a historical event.
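
To make the agent idea concrete, here is a minimal sketch in Python of how such a map-choosing agent might rank candidates. Everything in it (the fields, the scoring weights, the sample resources) is a hypothetical illustration, not the logic of any real product.

```python
# A hypothetical sketch of an "agent pursuing a goal": score candidate
# maps against lesson objectives, prior outcomes, and student needs.

from dataclasses import dataclass

@dataclass
class MapResource:
    title: str
    topics: set          # topics the map covers
    past_success: float  # e.g., share of similar lessons where it helped
    accessible: bool     # has features some students in the class need

def score(resource: MapResource, objectives: set) -> float:
    """Combine goal fit, prior outcomes, and student needs into one number."""
    topic_fit = len(resource.topics & objectives) / max(len(objectives), 1)
    return 0.5 * topic_fit + 0.3 * resource.past_success + 0.2 * resource.accessible

def choose_maps(candidates, objectives, top_n=3):
    """Return a few top options so the teacher makes the final call."""
    return sorted(candidates, key=lambda r: score(r, objectives), reverse=True)[:top_n]

candidates = [
    MapResource("Ancient Greece: city-states", {"greece", "city-states"}, 0.8, True),
    MapResource("Mediterranean trade routes", {"greece", "trade"}, 0.6, False),
    MapResource("The modern Aegean", {"greece"}, 0.4, True),
]
for option in choose_maps(candidates, {"greece", "city-states"}):
    print(option.title)
```

Note the design choice in choose_maps: the agent surfaces a few top options rather than acting alone, which keeps the final selection with the teacher.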

This definition also brings up “theory or patterns in data” as the basis for how a computer reasons. Computers process theory and data in different ways than humans do. AI depends on associations or relationships found in the specific data identified during the AI system development process. Although some associations may be useful, others may be biased or inappropriate; detecting and avoiding bad associations in data is a major challenge. Every parent is familiar with the problem: a person or computer may say, “Our data suggests your student should be placed in this class,” and the parent may well argue, “No, you are using the wrong data. I know my child better, and they should instead be placed in another class.” This problem is not unique to AI systems, but AI amplifies it: when a computer uses data to make a recommendation, the recommendation can appear more objective and authoritative than it actually is.[3]
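
A toy sketch can show how this dynamic arises. All records below are invented; the point is only that a recommendation built from historical associations replays whatever pattern, fair or not, sits in the data.

```python
# A toy sketch (all records invented) of a recommendation built purely
# from associations in historical data. Nothing here reflects a real
# placement system; it only shows how a learned pattern can replay bias.

from collections import Counter

# Historical placements. The pattern (few advanced placements from
# school "B") could reflect past under-resourcing rather than ability,
# but the data records only the outcome.
history = [
    {"school": "A", "placement": "advanced"},
    {"school": "A", "placement": "advanced"},
    {"school": "A", "placement": "standard"},
    {"school": "B", "placement": "standard"},
    {"school": "B", "placement": "standard"},
    {"school": "B", "placement": "standard"},
]

def recommend(school: str) -> str:
    """Recommend whatever placement was most common for this school."""
    counts = Counter(r["placement"] for r in history if r["school"] == school)
    return counts.most_common(1)[0][0]

# The system replays the historical association, so a student from
# school "B" is steered to "standard" regardless of their own record,
# and the output looks like an objective, data-backed conclusion.
print(recommend("B"))  # -> standard
```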

Although this definition can be useful, it is also somewhat misleading. Our human view of agency, goal pursuit, and reasoning includes our ability to make sense of context; AI does not yet understand context well, and things break when the context shifts even slightly. For this and other reasons, people will have a broader understanding of goals and must be involved in goal-setting, pattern analysis, and decision-making.

Definition 3: Intelligence augmentation

“Augmented intelligence is a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making and new experiences.”[4]

Whereas the first two definitions may appear to substitute computer-based reasoning for human reasoning, a long-standing perspective keeps humans in the loop and positions AI systems as a support for human reasoning. “Intelligence augmentation” (IA) centers “intelligence” and “decision-making” in humans but recognizes that people are sometimes overburdened and benefit from assistive tools. AI may help teachers make better decisions because computers can notice patterns that teachers miss when their time and attention are stretched thin. For example, when a teacher and student agree that the student needs reminders, an AI agent may provide reminders in whatever form the student likes without adding to the teacher’s workload. IA draws on the same basic capabilities as AI, using associations in data to notice patterns and using automation to take actions based on those patterns. However, IA may be safer and more humane because people remain in charge and in the loop.
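
As a rough sketch of that division of labor (the function names and reminder scenario are assumptions for illustration, not a particular product), the pattern is simply: the system drafts, the person approves.

```python
# A minimal sketch of the intelligence-augmentation pattern: the system
# proposes, a person decides. The reminder scenario and function names
# are assumptions for illustration, not a specific tool's API.

def propose_reminders(open_tasks: list) -> list:
    """AI side: notice a pattern (tasks still open) and draft reminders."""
    return [f"Reminder: '{task}' is still open." for task in open_tasks]

def teacher_review(proposals: list, approve) -> list:
    """Human side: nothing is sent unless the teacher approves it."""
    return [message for message in proposals if approve(message)]

proposals = propose_reminders(["essay draft", "lab report"])

# Approval is simulated with a lambda here; in practice this step is a
# real human decision, which is what keeps the person in the loop.
approved = teacher_review(proposals, approve=lambda m: "lab report" in m)
for message in approved:
    print("send:", message)
```

The important design choice is that teacher_review sits between noticing a pattern and taking an action; removing that step turns augmentation back into autonomous agency.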

Implications for public policy

We can summarize the capabilities of AI systems as “automations based on associations.” By recognizing associations in data and initiating automated actions, AI systems will give teaching and learning technologies the capability to pursue goals, choose among different plans, and carry out sequences of actions. Teachers and students may find AI systems helpful, much as we find automated maps helpful when traveling, but risks will appear as well. The intelligence augmentation perspective can help by keeping humans involved in the teaching and learning processes that involve setting goals, analyzing patterns, and making decisions. Nonetheless, risks will remain.

Policy can inform educators and the public so they are prepared for the changing nature of the technologies used in schools and wherever students learn. Policy may also activate the involvement of students, teachers, educational leaders, and other community members so that they can participate in designing and evaluating AI systems. In addition, policy may help the marketplace to function well, for example, by providing guidelines, requiring disclosures, or regulating aspects of how products use data or automate decisions. In future posts, we will elaborate on the themes of opportunities and risks of AI along with what policy and guidance documents can do to steer these capabilities toward societal and individual goals.