Opinion: AI Will Not Transform K-12 Education Without Changes to 'the Grammar of School'

Call me a luddite, but I’m not convinced artificial intelligence will transform educational outcomes.

This has nothing to do with the technology itself. It’s actually awe-inspiring to see how ChatGPT can provide instant feedback to students on their writing, deftly coach them in solving a complex math problem, and interact in ways that can easily be mistaken for a human tutor. It will only get better over time.

But it’s important to remember that promises of educational transformation were made about television in the 1970s, desktop computers in the 1980s and the internet in the 1990s. If “transformation” is defined as an era with entirely new levels of student outcomes, it is hard to say that any of these innovations delivered — still, fewer than 1 in 3 students graduate high school ready for college or a career.


What would make this time different is if systems leaders and policymakers recognize that the benefits of new technologies in K-12 education are inherently constrained by age-based cohorts, standardized curriculum and all the other hallmarks of what David Tyack and Larry Cuban famously called “the grammar of school.” 

That basic paradigm of schooling was designed over a century ago around a different core purpose: to educate some while winnowing out others. It’s akin to a timed, academic obstacle course where learning is structured based on a student’s age. Once a student falls behind, it can be hard to catch back up. 

When technology is applied within this industrial paradigm, schools can operate more efficiently. Electronic gradebooks, smartboards, digital assessments and now AI-generated lessons and student feedback can all make teaching a more sustainable profession. That’s an important end in itself — but it’s not one that will necessarily lead to transformative student outcomes.

What about personalization? Several organizations, including ours, have embedded aspects of AI to support more tailored approaches to learning. But the use of such technology can often conflict with the standardized methods of teaching that are core to the grammar of school.

A fifth-grade math teacher, for example, can use AI-generated lesson plans, quiz generators and grading tools to support teaching grade-level standards. But when student performance in that class spans at least seven grade levels, using AI to support fifth-grade standards supercharges the ranking and sorting that is core to the grammar of school.

A more consequential path would be to redesign math education so each teacher can meet all students where they are and help them accelerate as far as they can with a combination of individual and group work. That’s hard to do in a traditional classroom of 30 academically diverse kids, but AI makes it far more possible. The key barrier is not technology. It’s a century-old paradigm of schooling in which curriculum, teacher training, classroom workflow, assessments, accountability systems and regulations are all oriented around whole-class, age-based instruction. 

How can schools break free from this legacy and shift to student-centered learning? 

The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.

Thousands of organizations are conducting research and development to reimagine how AI will fundamentally change the experience of consumers, passengers, patients, business leaders, employees, athletes and others. But few are doing the same when it comes to teachers and students. Districts are built to run schools, not to redesign them; universities are organized around scholarship and teacher development; and curriculum companies are largely focused on tools that fit within the current paradigm of schooling, which is where the demand is. Absent organizations designing new learning models that use AI and other technologies in ways that fundamentally rethink the student and teacher experience, the grammar of school will remain intact.

But stoking the supply of new learning models won’t be enough. School districts have spent decades grouping students by age, buying textbooks, training teachers on a uniform scope and sequence, and administering standardized tests based on students’ grade levels. Beginning to shift away from that can feel risky, if not impossible. But overcoming the forces of inertia is possible if local leaders and their communities develop and act upon a new vision for learning that is rooted in meeting each student’s unique strengths and needs.  

Finally, policymakers must create the conditions for student-centered learning to emerge. At the federal level, that begins by revamping the assessment and accountability provisions within the Elementary and Secondary Education Act so schools aren’t penalized for focusing on individual student needs. States also have a key role to play in encouraging schools and districts to embrace student-centered learning, as exemplified by initiatives like Greater Math in North Dakota.

AI has massive potential to dramatically impact children’s reading abilities, quantitative reasoning skills, understanding of history and the sciences, and more. But unless there’s a broader shift toward student-centered learning, the gap between what schools could be and what they are will only widen.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different

Over the last decade, educators and administrators have often encountered lofty promises of technology revolutionizing learning, only to experience disappointment when reality failed to meet expectations. It’s understandable, then, that educators might view the current excitement around artificial intelligence with a measure of caution: Is this another overhyped fad, or are we on the cusp of a genuine breakthrough?

A new generation of sophisticated systems has emerged in the last year, including OpenAI’s GPT-4. These so-called large-language models employ neural networks trained on massive data sets to generate text that is extremely human-like. By understanding context and analyzing patterns, they can produce relevant, coherent and creative responses to prompts.


Based on my experiences using several of these systems over the past year, I believe that society may be in the early stages of a transformative moment, similar to the introduction of the web browser and the smartphone. These nascent iterations have flaws and limitations, but they provide a glimpse into what might be possible on the very near horizon, where AI assistants liberate educators from mundane and tedious tasks, allowing them to spend more time with students. And this may very well usher in an era of individualized learning, empowering all students to realize their full potential and fostering a more equitable and effective educational experience.

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

  1. Smarter capabilities: These AI systems are now capable of passing many standardized tests, from high school to graduate- and professional-level exams that span mathematics, science, coding, history, law and literature. Google’s Med-PaLM performed at an “expert” doctor level on the medical licensing exam, not only correctly answering the questions but also providing a rationale for its responses. The rate of improvement with these systems is astonishing. For example, GPT-4 made significant progress in just four months, going from a failing grade on the bar exam to scoring in the 90th percentile. It scored in the 93rd percentile on the SAT reading and writing test and the 88th on the LSAT, and got a 5 — the top score — on several Advanced Placement exams.
  2. Reasoning engines: AI models like GPT-4, Microsoft’s Bing Chat, and Google’s Bard are advancing beyond simple knowledge repositories. They are developing into sophisticated reasoning engines that can contextualize, infer and deduce information in a manner strikingly similar to human thought. While traditional search engines functioned like librarians guiding users toward relevant resources, this new generation of AI tools acts as skilled graduate research assistants. They can be tasked with requests such as conducting literature reviews, analyzing data or text, synthesizing findings and generating content, stories and tailored lesson plans.
  3. Language is the interface: One of the remarkable aspects of these systems is their ability to interpret and respond to natural language commands, eliminating the need to navigate confusing menus or create complicated formulas. These systems also explain concepts in ways people can easily understand, using metaphors and analogies they can relate to. If an answer is too confusing, you can ask it to rephrase the response or provide more examples (see the sketch after this list).
  4. Unprecedented scale: Innovations often catch on slowly, as start-ups must penetrate markets dominated by well-established companies. AI stands in stark contrast to this norm. With tech giants like Google, OpenAI and Microsoft leading the charge, the capabilities of large-language models are not only rapidly scaling, but becoming deeply integrated into a myriad of products, services and emerging companies.
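
To make point 3 concrete, here is a minimal sketch of that conversational loop, written against the openai Python package’s pre-1.0 ChatCompletion interface (current when these tools launched); the API key, model name and prompts are placeholders, not a recommended setup:

```python
# A minimal sketch of "language as the interface": one request, then a
# plain-English refinement of the same request. Assumes the pre-1.0
# openai package; the key, model name and prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

messages = [{"role": "user",
             "content": "Explain photosynthesis to a 10-year-old."}]
reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# If the answer is too confusing, the "interface" is just more language:
messages.append({"role": "assistant",
                 "content": reply.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Too complicated. Rephrase it with a cooking analogy."})
reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```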

These capabilities are finding their way into the classroom through early experiments that provide a tantalizing sense of what might be possible.

  • Tutoring assistants: The capability of these systems to understand and generate human-like text allows them to provide individualized tutoring to students. They can offer explanations, guidance and real-time feedback tailored to each learner’s unique needs and interests. Khan Academy and Duolingo are piloting GPT-4-powered tutors that have been trained on their unique datasets.
  • Teaching assistants: Teachers spend hours on tedious administrative tasks, from lesson planning to searching for instructional resources, often leaving less time for teaching. As capable reasoning engines, AI tools can assist teachers by automating many of these tasks — including quickly generating lesson plan ideas, developing worksheets, drafting quizzes and translating content for English learners.
  • Student assistants: AI-based feedback systems have the capacity to offer constructive critiques on student writing, including feedback aligned to different assessments, which helps students elevate the quality of their work and fine-tune their writing skills. These systems also provide immediate help when students are stuck on a concept or project.

While these technologies are enormously promising, it is also important to recognize that they have limitations. They still struggle with some math calculations and at times offer inaccurate information. Rather than supplanting teachers’ expertise and judgment, they should be utilized as a supportive co-pilot, enhancing the overall educational experience. Many of these limitations are being addressed through integrations with other services, such as Wolfram for dramatically better math capabilities. Put another way, this is the worst these AI technologies will be. Whatever shortcomings they have now will likely be improved in future releases.

The unprecedented scale and rapid adoption of generative AI mean that these benefits are not distant possibilities, but realities within reach for students and educators worldwide. By harnessing the power of AI, it is possible to create a future where teaching and learning are not only more effective and equitable, but also deeply personalized, with students empowered to reach their full potential and teachers freed to focus on teaching and fostering meaningful connections with their students.

Opinion: ‘This Changes Everything’: AI Is About to Upend Teaching and Learning

In April 2022, I attended the ASU-GSV Summit, an ed tech conference in San Diego. I’d recently become an official Arizona State University employee, and as I was grabbing coffee, I saw my new boss, university President Michael Crow, speaking on a panel being broadcast on a big screen. At the end of the discussion, the moderator asked Crow what we’d be talking about at the 2030 summit. In his response, Crow referenced a science fiction book by Neal Stephenson, The Diamond Age: Or, A Young Lady’s Illustrated Primer. I was intrigued.

I’ve since read the book (which is weird but fascinating). The protagonist is a girl named Nell who is a pauper and victim of abuse in a dystopian world. By a stroke of luck, Nell comes to own a device that combines artificial intelligence and real human interaction to teach her all she needs to know to survive and develop a high level of intellectual capacity. The device adjusts the lessons to Nell’s moods and unique needs. Over time, she develops an exceptional vocabulary, critical physical skills (including self-defense) and a knowledge base on par with that of societal elites – which enables her to transcend the misery of her life.


Crow told the conference crowd last year: In 2030, we will have tools like this. In fact, he said, ASU and engineers elsewhere are developing them now. But if we reconvene in 2030 without figuring out how we get those kinds of tools to kids like Nell, we will have failed.

The recent and rapid advances in artificial intelligence have been on my radar for some time, but I came home from last week’s 2023 ASU-GSV conference even more certain that advances in AI via models such as GPT-4 (the latest iteration of ChatGPT) and Bing will soon be used as radically personalized learning tools like Nell’s primer. That future seemed far off in 2022 — but these tools are developing so fast, they’re not just here now; in a matter of weeks or months, they’re going to be your kid’s tutor, your teacher’s assistant and your family’s homework helper.

I attended several conference panels on AI, and one specifically on Khan Academy’s new tutoring program, Khanmigo, which is powered by GPT-4, blew me away. As Sal Khan put it when he realized the power of this generation of AI: “This changes everything.” Of course, attendees discussed the safety and security risks and threats of using AI in the classroom. But what struck me was the potential for these sophisticated tools that harness the intelligence of the internet to radically personalize educational content and its delivery to each child. Educators can radically equalize education opportunities if they figure out how to ride this technological revolution effectively.

Khanmigo can do extraordinary tasks. For example, it writes with students, not for them. It gives sophisticated prompts to encourage students to think more deeply about what they’re reading or encountering, and to explain their thinking. It will soon be able to remember students’ individual histories and customize lessons and assessments to their needs and preferences. And that’s just the start. Khan described how one student reading The Great Gatsby conversed in real time with an AI version of Jay Gatsby himself to discuss the book’s imagery and symbolism. Khan said his own daughter invented a character for a story and then asked to speak to her own character — through Khanmigo — to further develop the plot.

Khanmigo — and likely other competing tools to come — also has the potential to revolutionize teaching. Right now, a teacher can use AI to develop a lesson plan, create an assessment customized to each student’s background or interests, and facilitate breakout sessions. This portends a massive shift in the teaching landscape for both K-12 and higher education — and likely even workforce training. By one account, the use of AI in colleges and universities is “beyond the point of no return.” A professor from the University of Pennsylvania’s Wharton School at the conference said he requires his students to use AI to write their papers, but they must turn them in with a “use guide” that demonstrates how they utilized the tool and cited it appropriately. He warns students that AI will lie and that they are responsible for ensuring accuracy. The professor said he no longer accepts papers that are “less than good” because, with the aid of AI, standards are now higher for everyone.

All this feels like science fiction becoming reality, but it is just the start. You have probably heard about how GPT-4 has made shocking advances compared to the previous generation of AI. Watch how it performs on the AP Bio or the bar exam. Watch how it performs nearly all duties of an executive assistant. Watch how it writes original and pretty good poetry or essays. Kids are indeed using this tool to write their final papers this year. But the pace of development is so rapid that one panelist predicted that in a year, AI will be making its own scientific discoveries — without direction from a human scientist. The implications for the types of jobs that will disappear and emerge because of these developments are difficult to predict, but rapid change and disruption will almost certainly be the new normal. This is just the beginning. Buckle your seat belts.

To be sure, the risks are real. Questions about student privacy, safety and security are serious. Preventing plagiarism, which is virtually undetectable with GPT-4, is on every teacher’s mind. Khan is currently working with school districts to set up guardrails and help students, teachers and parents navigate these very real concerns. But a common response — to shut down or forbid the use of AI in schools — is as shortsighted and fruitless as trying to stop an avalanche by building a snowbank. This technology is unstoppable. Educators and district, state and federal leaders need to start planning now for how to maximize the opportunities for students and families and educators while minimizing the risks.

A host of policy and research questions need to be explored: What kind of guardrails are available and which are most effective? Which tools and pedagogical approaches best accelerate learning? In what ways can AI support innovations that truly move the needle for teaching and learning? Education policy leaders, ed tech developers and researchers must begin to address these issues. Quickly.

I believe AI can make the teaching profession much more effective and sustainable. It can also put an end to the ridiculous notion that one teacher must be wholly responsible for addressing every student’s learning level and individual needs. AI — combined with novel staffing models like team teaching and specialized roles being piloted in districts like Mesa, Arizona, by my colleagues at ASU — could finally allow teachers to start working in subjects they’re most suited to. Instead of fretting about the lack of high-dosage, daily tutoring, which is the best way to address learning gaps, districts and families could see an army of AI tutors available for all students across the U.S. Parents who have been frustrated with the lack of attention to their children’s needs could set up an AI tutor at home.

But to go back to Michael Crow’s message: If technology and education leaders develop these tools but do not ensure they reach the students most in need, they will have failed. The field must begin to 1) track what is happening in schools and living rooms across the country around AI and learning; 2) build a policy infrastructure and research agenda to develop and enforce safeguards and move knowledge in real time; and 3) dream big about realizing a future of learning with the aid of AI.

As CRPE’s 25th anniversary essay series predicted in 2018, there are many things those planning for the future of education cannot know with the rise of AI: the effect of rapid climate change, natural disasters and migrations; shifting geopolitical forces; fast-rising inequalities; and racial injustices. It is clear, however, that education must change to adapt to these new realities. This must happen quickly and well if educators are to adeptly combine the positive forces of AI with powers that only the human mind possesses. To make this shift, schools will need help to transition to a more nimble and resilient system of learning pathways for students. CRPE has been writing about this transition for five years, and we are now launching a series of research studies, grant investments and convenings that bring together educators with technology developers to help navigate the path forward. 

I hope that when people reconvene at ASU-GSV in 2030, AI will have been utilized so effectively to reimagine education that attendees can say they have radically customized learning for all kids like Nell. Despite the risks, using AI in classrooms could help eliminate poverty, reinvigorate the global economy, stem climate change and, potentially, help us humans co-exist more peacefully. The time is now to envision the future and begin taking steps to get there.

Opinion: ChatGPT Is Here to Stay. Testing & Curriculum Must Adapt for Students to Succeed

As a former teacher, I have seen the power of technology to enhance and transform the way educators teach and learn. From interactive whiteboards to educational apps, technology has the potential to revolutionize education and better prepare students for the future. That’s why the decision by some school districts to ban ChatGPT — which generates human-like responses to complex questions using artificial intelligence — is deeply concerning. It risks widening the gap between those who can harness the power of this technology and those who cannot, ultimately harming students’ education and career prospects.

In a recent essay titled “The Age of AI Has Begun,” Bill Gates identified the technology behind ChatGPT as one of the two most groundbreaking he has witnessed in his lifetime. Gates believes it will fundamentally reorient entire industries. Researchers at OpenAI, the company that created ChatGPT, estimate the technology has the potential to disrupt 19% of U.S. jobs and that four-fifths of American workers could see their jobs affected by chatbots in some way. Among the most vulnerable: translators, writers, public relations representatives, accountants, mathematicians, blockchain engineers and journalists.

Already, effective use of ChatGPT is becoming a highly valued skill, impacting workforce demands. A San Francisco-based company is offering salaries of up to $335,000 for engineers skilled in writing prompts — the questions that generate complex responses using this technology. A Japanese company is testing new hires on their ChatGPT proficiency and requiring them to apply it in their work. McKinsey & Company has estimated that between 400 million and 800 million jobs could be lost to automated technology by 2030 — and that was in 2017, before ChatGPT came on the scene.


Employees are perceiving a significant shift in their workplaces, and many want training on how to effectively use AI tools such as ChatGPT to perform their jobs. This growing demand for these skills underscores the need for schools to prepare students — especially those in high school — to meet these evolving demands.

That’s why banning ChatGPT is a mistake. It would be like prohibiting students from learning how to use laptops and calculators. To fully utilize ChatGPT’s capabilities, users must create thoughtful prompts, review the output, refine their requests, provide feedback to the chatbot and then have it integrate their ideas to produce the desired insight or product. Students must employ essential skills such as reason, logic, writing, reading comprehension, critical thinking, creativity and subject knowledge across various topics to engage a generative AI technology effectively. They must also learn to recognize its limitations and propensities for error.

Banning ChatGPT in classrooms risks creating a division between students who learn how to utilize its capabilities and those who are left behind.

Preparing students for the demands of the 21st century will take a comprehensive approach. To achieve this, the federal government can require high schools to assess AI proficiency within their existing English Language Arts and math exams. This approach can motivate states to redesign their K-12 curricular standards, which influence what students learn daily. State agencies must lead the way in integrating generative AI technologies into their K-12 standards, investing in educator training and developing effective curriculum materials. Washington should incentivize and fund these efforts.

Businesses must recognize the importance of preparing their future workforce and encourage state education officials to incorporate technologies like ChatGPT into learning standards. Philanthropic organizations can partner with school districts to create pilot programs demonstrating successful AI tool integration, inspiring state agencies to prioritize and fund this work.

Advocacy is also crucial to the success of these efforts. Parents must urge their children’s schools to teach AI technologies, and teachers should insist on adequate training to become proficient in them. Collaboration among educators and families is essential for students to acquire the necessary tools and skills to thrive in an AI-driven world.

Whether schools embrace it or not, generative AI technology will transform how students access information and learn. Other countries are paying attention. Singapore is already introducing AI-driven support systems for students and teachers. The United Arab Emirates aims to provide AI training to one-third of its annual STEM graduates, and the United Kingdom — in its effort to become a leading global AI superpower — has set a goal of producing 1,000 AI-focused Ph.D.s over five years.

In this new, AI-driven world, success will belong to those who possess the skills to navigate it effectively. To equip students for an ever-changing technological landscape, K-12 and higher education must adopt generative AI technologies like ChatGPT. In doing so, they can foster a well-educated and skilled workforce, encourage innovation and build a brighter future for everyone.

NM district turns to gun-detection AI in effort to prevent school shootings

This article was originally published in Source New Mexico.

Clovis Municipal School District recently began using artificial intelligence technology designed to detect guns and potential shooters on school campuses. The software can even alert law enforcement before a single shot is fired.

The AI technology is designed by ZeroEyes, a Philadelphia-based company founded by a group of former Navy SEALs. The company’s software installs in line with the district’s existing camera systems and operates in the background, constantly analyzing every frame of video as it searches for signs of a firearm. If a gun is detected, the software sends still images to a human who will determine if the gun is real and if lives are in danger.

That human review of a perceived gun is often completed within five seconds, ZeroEyes’ co-founder and Chief Revenue Officer Sam Alaimo said. And if the gun is determined to be real, district officials can be notified within seconds through a number of methods, and ZeroEyes even has the capability to contact law enforcement directly through RapidSOS, a platform that sends data to 911 call centers.
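
Conceptually, the flow the company describes is a simple detect-then-verify pipeline. The sketch below is purely hypothetical; ZeroEyes’ software is proprietary, so every function, class and name here is invented for illustration rather than taken from its actual system:

```python
# Hypothetical sketch of the detect-then-verify flow described above.
# Nothing here is ZeroEyes' actual code or API; every name is invented.
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class Detection:
    camera_id: str
    confidence: float
    bounding_box: Tuple[int, int, int, int]  # box around the perceived gun

def detect_gun(frame, camera_id: str) -> Optional[Detection]:
    """Stand-in for the trained model that scans every frame for firearms."""
    # A real system would run a neural-network object detector here.
    return None  # or a Detection when a possible gun appears

def human_review(detection: Detection, still_image) -> bool:
    """Stand-in for the human-staffed operations center, where a person
    confirms within seconds whether the flagged object is a real gun."""
    return False  # placeholder verdict

def notify_district(detection: Detection) -> None:
    print(f"Alerting district: possible firearm on camera {detection.camera_id}")

def notify_911(detection: Detection) -> None:
    print("Relaying location and weapon type to a 911 data platform")

def run_pipeline(frames: Iterable[Tuple[str, object]]) -> None:
    """frames yields (camera_id, frame) pairs from the existing cameras."""
    for camera_id, frame in frames:
        detection = detect_gun(frame, camera_id)
        if detection is None:
            continue  # no live video leaves the school; only flagged stills do
        if human_review(detection, still_image=frame):
            notify_district(detection)
            notify_911(detection)
```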


Along with fast notification, the software can also relay the exact location of the potential shooter to police to help officers locate and stop a gunman.

“Schools are complex. Kids that go there often don’t know their way around them. If you call law enforcement and say, ‘There’s a shooting in my school,’ where are (first responders) going to show up?” Alaimo said. “ If we can get them to where the shooter is, they can stop the killing as soon as humanly possible and then take care of anybody who may have been injured in the process. Every second in those situations counts. Every second literally could mean a life.”

Clovis Municipal Schools has signed a four-year, $345,000 agreement for a subscription to the ZeroEyes software and its monitoring services, Loran Hill, the district’s senior director of operations, said. The district has funded the technology with money from the coronavirus aid bill, a $2.2 trillion federal pandemic recovery package.

Hill isn’t aware of any shooting in the history of Clovis schools but said the district was looking for ways to prevent one from ever happening. The district took proposals from several companies with detection and prevention technologies, and ultimately, ZeroEyes was selected by a review committee.

District officials were drawn to ZeroEyes for its ability to potentially prevent tragedy, and because of its human-staffed operation centers. These facilities — which ZeroEyes calls ZOCs — are staffed by former military and law enforcement members who await notification of potential threats. When the AI detects a possible gun, the software takes a screenshot of the video and outlines the perceived gun with a brightly colored box to help the human reviewing the image find the possible gun.

Within three to five seconds, a human in the ZOC determines if there’s a threat or not. If law enforcement is contacted, those in the ZOC are able to relay directly to local law enforcement what type of firearm the person is armed with. The company currently has two ZOCs, one near Philadelphia, and one in Hawaii.

ZeroEyes also maintains a green screen lab at its Philadelphia headquarters where any type of scenario can be created to mimic any physical environment, from a classroom to a school hallway, and any location outdoors. The software is also trained to recognize a variety of stock and modified firearms from the smallest pistols to the longest rifles or shotguns.

The software constantly searches for any sign of a firearm, from a gun being pulled out of a backpack to a gun being pointed at someone. In one instance, Alaimo said someone was wearing a T-shirt with an Uzi submachine gun printed on it and the software detected the gun as a possible threat. A still image of the shirt was sent to an operation center for analysis where a human was able to determine that no real gun was in the image.

Since the software works with a school district’s existing camera systems, it will search for a gun anywhere within view of those cameras. And, according to Alaimo, the software spots guns with the same certainty as a human watching for them on campus.

“If a gun’s in front of that camera, it’ll pick it up,” Alaimo said. “If the human eye can tell it’s a gun, the camera will tell it’s a gun. We learned this early on: You can’t just train an algorithm to detect a gun being held as if it’s about to shoot somebody. You have to be able to detect a gun in any circumstance to make sure you never miss a true positive.”

Hill with Clovis Municipal Schools said one thing that drew district officials to ZeroEyes was how the software can detect things that a human might miss.

“Artificial intelligence is able to look all day, every day,” he said. “We appreciate our (student resource) officers, but ZeroEyes is going to cover much more ground than an officer is able to.”

As with the use of most technology in schools, some may be concerned about student privacy. Alaimo said ZeroEyes only receives images from cameras placed in schools when the software detects a possible threat, and the company does not have the ability to access a live feed of any given school district’s cameras.

“We’re very stringent with our data protection privacy rights,” Alaimo said. “We cannot recognize faces, we can’t store biometric data and we don’t want to do those things. It’s literally just: Is there a gun, yes or no? That is our primary focus.”

Racial bias is also a concern when it comes to determining whether someone might commit a crime — like carrying out a mass shooting. And while software may not hold any bias while making a threat determination, a human who reviews the images sent by the software might. Alaimo understands those concerns, but he said those in the command center that review the images are solely focused on the gun.

“The algorithm makes it very apparent — the situation makes it apparent,” he said. “Is it a gun or not a gun? Race does not come into it.”

Hill said the district researched ZeroEyes prior to adopting its technology, and district officials feel confident that students’ privacy is protected because the company isn’t monitoring or keeping the district’s data. He said he’s confident that race will not be a determining factor in whether law enforcement is called, based on the demonstrations of the software the district has seen. And because the process of determining a threat happens so quickly, the district doesn’t feel there’s much time for the human reviewer to determine the race of the potential shooter.

Founded in 2018, ZeroEyes is currently in use in 30 states, and Alaimo estimates ZeroEyes will be in use in all 50 states by the end of the year. The company’s AI technology is also used in Mexico and England. And while the technology can be found in locations like casinos and shopping malls, Alaimo said it was developed for use in schools with the goal of keeping kids safe from mass shootings.

“What often happens in these circumstances is that people talk about mental health, and then they argue about gun laws, and then everybody offers thoughts and prayers,” he said. “We wanted to build something that could work right now. We wanted a solution that could actually make a dent right now, and save the lives of kids.”

Source New Mexico is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Source New Mexico maintains editorial independence. Contact Editor Shaun Griswold for questions: info@sourcenm.com. Follow Source New Mexico on Facebook and Twitter.

Opinion: Virtual Reality & Other New Technologies Pose Risks for Kids. It’s Time to Act

Almost immediately after ChatGPT, a captivating artificial intelligence-powered chatbot, was released late last year, school districts across the country moved to limit or block access to it. As rationale, they cited a combination of potential negative impacts on student learning and concerns about plagiarism, privacy and content accuracy.

These districts’ reactions to ChatGPT have led to a debate among policymakers and parents, teachers and technologists about the utility of this new chatbot. This deliberation magnifies a troubling truth: Superintendents, principals and teachers are making decisions about the adoption of emerging technology without the answers to fundamental questions about the benefits and risks. 


Technology has the potential to modernize education and help prepare students for an increasingly complex future. But the risks to children are just beginning to be uncovered. Creating a policy and regulatory framework focused on building a deeper understanding of the benefits and risks of emerging technologies, and protecting children where the evidence is incomplete, is not alarmist, but a responsible course of action. 

Why act now? 

First, recent history has demonstrated that emerging technology can pose real risks to children. Evidence suggests a correlation between time spent on social media and adolescent anxiety, depression, self-harm and suicide. These impacts seem particularly significant for Gen Z teenage girls. While there is debate among researchers about the size of these effects, the state of adolescent mental health has deteriorated to the extent that it was declared a national emergency in 2021 by the American Academy of Pediatrics, the American Academy of Child and Adolescent Psychiatry, and the Children’s Hospital Association. Social media seems to be a contributing factor.

Second, immersive technologies, including virtual reality, augmented reality, mixed reality and brain-computer interfaces, may intensify the benefits and risks to children. Immersive technologies have the potential to fundamentally remake teaching and learning. But the impact on childhood development of exposure to multisensory experiences replicating the physical world in digital spaces is just beginning to be understood — and there is cause for concern based on limited research. For example, a 2021 study concluded that immersive virtual reality can interfere with the development of coordination that allows children to maintain balance. And a 2021 review of 85 studies on the impact of virtual reality on children revealed evidence of cognition issues, difficulty navigating real and virtual worlds, and addiction. The most significant risk may be how frequent and prolonged exposure to virtual environments impacts mental health.

Third, the digital divide has narrowed considerably. Government and the private sector have driven improvements in internet access at schools, expanded cellular networks and made mobile and computing devices significantly more affordable. Since 2014-15, the percentage of teens who have a smartphone has increased from 73% to 95%. Paired with money from COVID-19 legislation that allowed schools to invest in hardware, more children will have opportunities to use emerging technologies than ever had access to older innovations — including apps and the internet — at home and in school. 

Based on emerging evidence on these impacts on children, and in the face of significant unknowns, a policy and regulatory framework focused on mitigating risks — while still allowing children to access the benefits of these technologies — is warranted. At the federal level, Congress should consider:

  • Compelling all emerging technology companies, including those producing immersive reality products that are utilized by children, to provide academic researchers access to their data.
  • Compelling all immersive reality companies to assess the privacy and protection of children in the design of any product or service that they offer.
  • Compelling all immersive reality companies to provide child development training to staff working on products intended for use by children.
  • Requiring hardware manufacturers of virtual reality, augmented reality, mixed reality and brain-computer interface devices targeted to children to prominently display on their packaging warning labels about unknown physical and mental health risks.
  • Establishing guidance, via the Department of Education, for district and school leaders to prepare their communities for the adoption of immersive technologies.
  • Requiring all immersive technology companies to inform users of product placement within the platform.
  • Compelling relevant federal regulatory agencies to provide clarification on the ways existing laws, such as the Health Insurance Portability and Accountability Act, the Children’s Online Privacy Protection Act, the Individuals with Disabilities Education Act and the Americans with Disabilities Act, apply to immersive technologies.
  • Compelling all immersive technology companies to acquire parental consent for data sharing, particularly biometric information, including eye scans, fingerprints, handprints, face geometry and voiceprints.
  • Providing guidelines around minimum age for the use of immersive technology platforms and products.

At the state level, every governor should carefully assess the action Utah took last week to regulate children’s use of social media and consider the following actions: 

  • Creating child well-being requirements for state procurement of any immersive technology.
  • Offering research and development grants to in-state immersive technology companies to focus on safety and well-being impacts on children.
  • Establishing protocols for reviewing districts’ use of emerging technologies to determine compliance with federal and state law.

Finally, at the local level, school boards, superintendents and school leaders should consider regulations and guidance for the selection, adoption and use of immersive technologies:

  • Assessing opportunities for integration with current teaching and learning methods and curriculum.
  • Investing in and planning for professional development around these technologies.
  • Ensuring accessibility for students with disabilities and English learners when planning around use of emerging technologies.
  • Ensuring that any planned use of emerging technologies in the classroom is compliant with state and federal special education laws.
  • Evaluating the costs of immersive technology procurement and necessary infrastructure upgrades and making the results transparent to the community.
  • Creating opportunities for educator, parent and student involvement in the purchasing process for technology.

If emerging technology can have detrimental impacts on children — and evidence points to that being the case — responsibly mitigating the risks associated with these technologies is prudent. Why chance it? Acting now is the best opportunity to allow children to reap the benefits safely.

ChatGPT Scores a C+ at the University of Minnesota Law School. Now What?

Though computer scientists have been using chatbots to simulate human thinking for more than 70 years, 2023 is fast becoming the year in which educators are realizing what artificial intelligence means for their work.

Over the past several weeks, they’ve been putting OpenAI’s ChatGPT through its paces on any number of professional-grade exams in law, medicine, and business, among others. The moves seem a natural development just weeks after the groundbreaking, free (for now) chatbot appeared. Now that nearly anyone can play with it, they’re testing how it performs in the real world — and figuring out what that might mean for both teaching skills like writing and critical thinking in K-12, and training young white-collar professionals at the college level. 

Most recently, four legal scholars at the University of Minnesota Law School tested it on 95 multiple-choice and 12 essay questions from four courses. It passed, though not exactly at the top of its class. The chatbot scraped by with a “low but passing grade” in all four courses: a C+ student.

But don’t get complacent, warned Daniel Schwarcz, a UM professor and one of the study’s authors. The AI earned that C+ “relative to incredibly motivated, incredibly talented students … and it was holding its own.”

Think of it this way, Schwarcz said: Plenty of C+ students at the university go on to graduate and pass the bar exam.


ChatGPT debuted less than three months ago, and its respectable performance on several of these tests is forcing educators to quickly rethink how they evaluate students — assigning generic written essays, for instance, now seems like an invitation for fraud. 

But it’s also, at a more basic level, forcing educators to reconsider how to help students see the value of learning to think through the material for themselves. 

Before he encountered ChatGPT, Schwarcz typically gave open-book exams. What the new technology is making him think more deeply about is whether he was often testing memorization, not thinking. “If that’s the case, I’ve written a bad exam,” he said.

And like Schwarcz, many educators now warn: With improving technology, today’s middling chatbot is tomorrow’s Turing valedictorian.

“If this kind of tool is producing a C+ answer in early 2023,” said Andrew M. Perlman, dean of Suffolk Law School in Boston, “what’s it going to be able to do in 2026?”

Fake studies and ‘human error’

Lawyers aren’t the only professionals in the chatbot’s crosshairs: In January, Christian Terwiesch, a business professor at the University of Pennsylvania’s Wharton School, let it loose on the final exam of Operations Management, a “typical MBA core course” at the nation’s pre-eminent business school. 

While the AI made several “surprising” math mistakes, Terwiesch wrote in the study’s summary, it impressed him with its ability to analyze case studies, among other tasks. “Not only are the answers correct, but the explanations are excellent,” he wrote.

Its final grade: B to B-.

A Wharton colleague, Ethan Mollick, in December told NPR that he got the chatbot to write a syllabus for a new course, as well as part of a lecture. And it generated a final assignment with a grading rubric. But its tendency to occasionally deliver erroneous answers from its wide-ranging web searches, Mollick said, makes it more like an “omniscient, eager-to-please intern who sometimes lies to you.”

Indeed, AI tools often create problems of their own. In January, Jeremy Faust, an emergency medicine physician at Brigham and Women’s Hospital in Boston, asked ChatGPT to diagnose a 35-year-old woman with chest pains. The patient, he specified, takes birth control pills but has no past medical history.

After a few rounds of back-and-forth, the bot, which Faust cheekily referred to as “Dr. OpenAI,” said she was probably suffering from a pulmonary embolism. When Faust suggested it could also be costochondritis, a painful inflammation of the cartilage that connects rib to breastbone, ChatGPT countered that its diagnosis was supported by research, specifically a 2007 study in the European Journal of Internal Medicine.

Then it offered a citation for a paper that does not exist. 


While the journal is real — and a few of the researchers cited have published in it — the bot created the citation out of thin air, Faust wrote. “I’m a little miffed that rather than admit its mistake, Dr. OpenAI stood its ground, and up and confabulated a research paper.”

Confronted with its lie, the AI “said that I must be mistaken,” Faust wrote. “I began to feel like I was Dave in “2001,” and that the computer was HAL-9000, blaming our disagreement on ‘human error.’”

Faust closed his computer.


‘Proof of original work’

Such bugs haven’t stopped educators from test-driving these tools for students and, in a few cases, for professionals.

Last December, just days after OpenAI released ChatGPT, Perlman, the Suffolk dean, presented it with a series of legal prompts. “I was interested in just pushing it to its limits,” he said.

Perlman transcribed its mostly respectable replies and co-authored a 16-page paper with the chatbot.


Peter Gault, founder of the AI literacy nonprofit Quill.org, which offers a free AI tool designed to help improve student writing, said that even if teachers think things are moving fast this winter, the reality is that they are moving even faster than they seem. Case in point: An online “prompt engineering” channel on the social platform Discord, devoted to helping students improve their ChatGPT requests for better, more accurate results, now has about 600,000 users, he said. “There are tens of thousands of students just swapping tips for how to cheat in it,” he said.

Gault’s nonprofit, along with CommonLit.org, has already debuted another free tool that helps educators sniff out the more formulaic writing that AI typically generates. 

While other educators have suggested that future ChatGPT versions could feature a kind of digital watermarking that identifies cut-and-pasted AI text, Gault said that would be easy to circumvent with software that basically launders the text and removes the watermark. He suggested that educators begin thinking now about how they can use tools like Google Docs’ version history to reveal what he calls “proof of original work.”


The idea is that educators can see all the writing and revising that go into student essays as they take shape. The typical student, he said, spends nine to 15 hours on a major essay. Google Docs and other tools like it can show that progression. Alternatively, if a student copies and pastes an essay or section from a tool like ChatGPT, he said, the software reveals that the student spent just moments on it.
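
As a rough illustration of that idea, the check could be as simple as summing the time between saved revisions. The sketch below assumes only a list of revision timestamps like the ones Google Docs’ version history exposes; the idle-gap rule and the thresholds are assumptions for illustration, not Quill.org’s or Google’s actual method:

```python
# Illustrative sketch of "proof of original work": estimate active writing
# time from document revision timestamps. The 15-minute idle-gap rule is an
# assumption for illustration, not Quill.org's or Google's actual method.
from datetime import datetime, timedelta
from typing import List

def active_writing_time(revisions: List[datetime],
                        max_gap: timedelta = timedelta(minutes=15)) -> timedelta:
    """Sum gaps between consecutive revisions, skipping long idle periods."""
    total = timedelta()
    for earlier, later in zip(revisions, revisions[1:]):
        gap = later - earlier
        if gap <= max_gap:
            total += gap
    return total

# A genuine essay accumulates many sessions adding up to hours of work;
# a pasted-in essay shows one burst of revisions lasting only moments.
revisions = [datetime(2023, 2, 1, 19, 0),
             datetime(2023, 2, 1, 19, 1),
             datetime(2023, 2, 1, 19, 2)]
hours = active_writing_time(revisions).total_seconds() / 3600
print(f"Estimated active writing time: {hours:.2f} hours")
```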

“We have these tools that can do the thinking for us,” Gault said. “But as the tools get more sophisticated, we just really risk that students are no longer really investing in building intellectual skills. It’s a difficult problem to solve. But I do think it’s worth solving.”

‘Resistance is futile’

Minnesota’s Schwarcz flatly said law schools must train students on tools like ChatGPT and its successors. These tools “are not going away — they’re just going to get better,” he said. “And so in my mind, ultimately as educators, the fundamental thing is to figure out how to train students to use these tools both ethically and effectively.”

Perlman also foresees law schools using tools like ChatGPT and whatever comes next to train lawyers, helping them generate first drafts of legal documents, among other products, as they learn their trade.

In the end, AI could streamline lawyering, allowing attorneys to spend more time practicing “at the top of their license,” Perlman said, engaging in more sophisticated legal work for clients. This, he said, is the part of the job lawyers find most enjoyable — and clients find most valuable.


It could also make such services more affordable and thus more available, Perlman said. So even as educators focus on the technology’s threat, “I think we are quickly going to have to pivot and think about how we teach students to use these tools to enable them to deliver their services better, faster and cheaper in the future.”

Perlman joked that the best way to think about the future of AI in the legal profession is to remember that old “Star Trek” maxim: “‘Resistance is futile.’ This technology is coming, and I think we ignore it at our peril — and we try to resist at our peril.”

ChatGPT: Learning Tool — or Threat? How a Texas College Is Eyeing New AI Program

This article was originally published in El Paso Matters.

ChatGPT has been in the headlines for months. At the University of Texas at El Paso, professors and students are not sure if it is a tool or a threat – or both.

Since its launch in November, the artificial intelligence program has generated concerns over its ability to produce essays, research papers and other written material that sounds natural based on someone’s prompts, and over how it could affect higher education. Instructors appreciate ChatGPT’s abilities, but are leery that students could misuse the program’s work and submit it as their own.

Those who have tried the free tool praise its ability to prepare straightforward responses that are free of spelling, grammar and punctuation errors. However, they also note that the writings often lack higher-order thinking and sometimes contain factually incorrect information.


Greg Beam, associate professor of practice in the Department of Communication, said he plans to use it in his Introduction to the Art of the Motion Picture course this spring. He called ChatGPT’s responses to his prompts “mechanically immaculate,” but bland in word choice, and lacking context and insights.


A UTEP instructor for more than five years, Beam characterized the program as an academic tool that could be abused, so he and other educators will need to explain and demonstrate its proper use. He plans to let students use it to augment course instruction and brainstorm ideas. Additionally, he may assign the program’s writings to students as a critiquing exercise.

“Rather than allowing it to be this forbidden fruit that’s hanging out there that they’re told not to take a bite of, I’m going to say here’s how to use it responsibly because I think it could actually be a very useful resource,” Beam said.


Andrew Fleck, associate professor of English and president of the university’s Faculty Senate, is more cautious. He does not plan to use ChatGPT in his spring classes. Instead, he has asked the Faculty Senate’s academic policy committee to review the university’s statement of academic integrity, which should be in every course syllabus, to determine if it needs to be updated regarding students’ reliance on artificial intelligence to produce their work.

UTEP officials did not respond to a request for comment on any steps the university planned to take regarding ChatGPT.

Fleck, a higher education faculty member for 30 years, recalled how colleagues raised similar concerns as internet search engines became popular in the 1990s. He said some students used technology to cheat, while faculty used it to catch offenders. Since ChatGPT’s debut, other programs have popped up claiming they can detect AI-generated writing.

“I’ll be curious how it kind of plays itself out in the next year or so,” Fleck said. “It certainly does pose certain kinds of risks, but I guess the question is how effective will ChatGPT be eventually in replicating human thought and human communication.”

UTEP Provost John Wiebe said advances in the accessibility of artificial intelligence have triggered faculty conversations at higher education institutions around the world, including UTEP. He said that after consultations with Faculty Senate leaders about the opportunities and challenges that faculty and students face because of ChatGPT, several faculty committees will work on the topic.

“AI is a tool that can be used to enhance learning, but can also be used in ways that violate UTEP’s Academic Dishonesty policy,” Wiebe said. “We will work to help faculty understand the issues and how their colleagues in other places are responding.”

Deki Peltshog, a sophomore computer science major, said she learned about the new artificial intelligence program through friends and social media, and used it during the winter break. ChatGPT amazed and amused her with its ability to respond to her requests for a song about cats and a poem about eating pizza at night.

The Bhutan native also tested the program’s grasp of languages. ChatGPT was trained on text comprising billions of words in many languages. She asked it to translate a simple question into her native language of Dzongkha. She said ChatGPT apologized after she informed it that it gave the wrong answer.

Peltshog, whose spring courses are in math, coding and engineering, said she does not plan to use ChatGPT this semester because she does not trust its grasp of facts. However, she sees its potential as a more direct search engine once it becomes more reliable and its knowledge extends beyond 2021.

“It could become a personalized tutor,” she said. “It would make studying more efficient.”

While some educators see the new program as a threat to academic honesty, others point out that it is just the latest method in a line that includes ghostwriters, research paper mills, exam banks and professional test takers. Critics also point out that such programs could limit a student’s growth as a critical thinker and problem solver.

Sam Altman, CEO of OpenAI, the San Francisco-based company that developed ChatGPT, seemed to concur in a Dec. 10, 2022, tweet. He said that the company’s new program is “good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview in progress.”

José de Piérola, professor of creative writing at UTEP and director of the department’s graduate studies program, said that colleagues might be giving ChatGPT too much credit.

De Piérola, a computer programmer and consultant for 20 years before he started on a literary path, said there are 20 to 25 artificial intelligence programs similar to ChatGPT. While the new program is superior to its rivals, he said, it mostly produces generic information about a subject. His point: human skills cannot be replaced when creativity is needed.

The human element was key to Jess Stahl, vice president of data science and analytics at the Northwest Commission on Colleges and Universities in Redmond, Washington. She participated in a Dec. 19 Zoom conversation about ChatGPT that attracted more than 250 participants from around the world.

Stahl, whose research focuses on initiatives that will enable academic institutions to benefit from innovations in technology, data science and artificial intelligence, said instructors should humanize their relationships with students and not try to compete with AI in terms of content. She also advised institutions to build their social and professional networks, and other resources that students could not access elsewhere.

Stahl said that faculty must rethink what they do professionally in and out of the classroom and decide what they can do better than the most advanced technology.

“It won’t be imparting facts, and it won’t be presenting curriculum, and it won’t be evaluating learning, and it won’t be preventing cheating, and all those things,” Stahl said. “What it is going to be is how human and important and valuable can you make your relationships with the learners so that you are doing that skill better than an advanced technology like ChatGPT that can mimic a very fake relationship.”

As a personal aside, de Piérola encouraged students who see ChatGPT as an academic shortcut not to lose sight of the true goal of a college education: to become the best version of themselves.

“That’s why you go to a university,” he said. “If you do that right, then you will get good grades, and a degree, but if you don’t do those things, the rest really doesn’t matter. You’ll just be the same person you were before you went to the university and that would be sad in most cases.”

This article first appeared on El Paso Matters and is republished here under a Creative Commons license.

Opinion: Rethinking College Admissions and Applications with an Eye on AI https://www.the74million.org/article/rethinking-college-admissions-and-applications-with-an-eye-on-ai/ Mon, 23 Jan 2023 14:30:00 +0000 https://www.the74million.org/?post_type=article&p=702745

Applying to college is a high-stakes process for students, a crucible of stress and expectations. Many young people feel their fates ride on finding just the right college to reach their dreams. As professionals who have supported high school students through thousands of college admission journeys, we believe the process is ripe for the use of ChatGPT, a powerful new artificial intelligence writing tool.

The entry point is likely to be the college essay, a task many young people find immobilizing. Anyone who works in college admissions must familiarize themselves with ChatGPT and begin to grapple with how the tool might enter into student work in the very near future.

If you haven’t given ChatGPT a try, you should. When asked to write a 500-word essay suitable for college admission, the computer produced a piece in seconds about a student’s interest in science and technology, work on the high school robotics team and desire to be part of a college community. It was a decent response to a basic prompt.


A more complex prompt left no question about the program’s strength: “Write a 500-word college admission essay that tells a dramatic story of a high schooler overcoming something significant in their life. Include references to places in their hometown of Philadelphia and a quote from a famous Philly artist.” The response was well-rounded and intriguing. It described the student coming out from behind an older brother’s shadow through community service using a quote from Will Smith and talked about learning and growing. Any counselor would have believed this was a well-written, human-authored essay.

This nuance is unprecedented, and already, schools in New York are banning access. However, the use of this technology is unavoidable. ChatGPT is on a path to shake up college admissions, and whether schools like it or not, students, admissions professionals and high school counselors must prepare. 

While the college application is full of basic demographic and academic questions, the essay is one of the few areas where students are expected to express aspects of themselves they feel are important and let their voices be heard. The stress of conveying the right set of values, or telling a good story, or sharing something deep and heartfelt in 650 words can be paralyzing. Students can spend months on just this one task.

ChatGPT can help. The program can write an outline to remove writer’s block and offer suggestions for building on students’ existing work. Used responsibly, it functions as a powerful writing companion.

But plagiarism is a serious risk, and educators must send a loud and clear message that it is wrong. ChatGPT adds a new variable to the equation because stealing from a computer may seem less harmful than stealing from a human. However, the program is built using input from countless writing samples from real humans. Passing off the work of ChatGPT as one’s own is plagiarism, plain and simple. This is where the conversation among students, teachers, counselors and parents needs to start.

High school educators should engage students in discussions about the ethics of using artificial intelligence and what constitutes plagiarism. AI has implications in a wide variety of subject areas, so counselors could partner with teachers to discuss its potential use in careers students may pursue. Counselors should also reiterate the importance of students telling their own, original story in their essay and should introduce ChatGPT to students’ family members so they can discuss it at home as well.

Admission offices that rely on the essay might expand their use of interviews, video submissions and/or writing samples that show a student’s response to teacher feedback. While these practices are time-intensive for application readers who are already stretched thin, they get to the heart of who a student is. At the same time, each college’s website should mention ChatGPT with a blurb from the admissions team about how they believe it should be used. 

None of these are perfect solutions. But banning ChatGPT or trying to avoid the topic by downplaying AI’s impact will not change the reality of the new college admissions or technology landscape. High school and college stakeholders must work together to build on existing admissions practices and address the inevitability of ChatGPT directly. 

This is an opportunity for college admissions stakeholders to collectively brainstorm novel approaches to this novel issue.

The Essay’s Future: We Talk to 4 Teachers, 2 Experts and 1 AI Chatbot https://www.the74million.org/article/the-future-of-the-high-school-essay-we-talk-to-4-teachers-2-experts-and-1-ai-chatbot/ Mon, 19 Dec 2022 18:01:00 +0000 https://www.the74million.org/?post_type=article&p=701602

ChatGPT, an AI-powered “large language” model, is poised to change the way high school English teachers do their jobs. With the ability to understand and respond to natural language, ChatGPT is a valuable tool for educators looking to provide personalized instruction and feedback to their students.

O.K., you’ve probably figured out by now that ChatGPT wrote that self-congratulatory opening. But it raises a question: If AI can produce a journalistic lede on command, what mischief could it unleash in high school English?

Actually, the chatbot, unveiled last month by the San Francisco-based R&D company OpenAI, is not intended to make high school English teachers obsolete. Instead, it is designed to assist teachers in their work and help them to provide better instruction and support to their students.


O.K., ChatGPT wrote most of that too. But you see the problem here, right?

English teachers, whose job is to get young students to read and think deeply and write clearly, are this winter coming up against a formidable, free-to-use foe that can do it all: With just a short prompt, it writes essays, poems, business letters, song lyrics, short stories, legal documents, computer code, even outlines and analyses of other writings. 

One user asked it to write a letter to her son explaining that “Santa isn’t real and we make up stories out of love.” In five trim paragraphs, it broke the bad news from Santa himself and told the boy, “I want you to know that the love and care that your parents have for you is real. They have created special memories and traditions for you out of love and a desire to make your childhood special.”

One TikToker noted recently that users can upload a podcast, lecture, or YouTube video transcript and ask ChatGPT to take complete notes.

Many educators are alarmed. One high school computer science teacher confessed last week, “I am having an existential crisis.” Many of those who have played with the tool over the past few weeks fear it could tempt millions of students to outsource their assignments and basically give up on learning to listen, think, read, or write.

Others, however, see potential in the new tool. Upon ChatGPT’s release, The 74 queried high school teachers and other educators, as well as thinkers in the tech and AI fields, to help us make sense of this development.

Here are seven ideas, only one of which was written by ChatGPT itself:

1. By its own admission, it messes up.

When we asked ChatGPT, “What’s the most important thing teachers need to know about you?” it offered that it’s “not a tool for teaching or providing educational content, and should not be used as a substitute for a teacher or educational resource.” It also admitted that it’s “not perfect and may generate responses that are inappropriate or incorrect. It is important to use ChatGPT with caution and to always fact-check any information it provides.”

2. It’s going to force teachers to rethink their practice — whether they like it or not. 

Josh Thompson, a former Virginia high school English teacher working on these issues for the National Council of Teachers of English, said it’s naïve to think that students won’t find ChatGPT very, very soon, and start using it for assignments. “Students have probably already seen that it’s out there,” he said. “So we kind of have to just think, ‘O.K., well, how is this going to affect us?’”

Josh Thompson (Courtesy of Josh Thompson)

In a word, Thompson said, it’s going to upend conventional wisdom about what’s important in the classroom, putting more emphasis on the writing process than the product. Teachers will need to refocus, perhaps even using ChatGPT to help students draft and revise. Students “might turn in this robotic draft, and then we have a conference about it and we talk,” he said.

The tool will force a painful conversation, Thompson and others said, about the utility of teaching the standard five-paragraph essay, which he joked “should be thrown out the window anyway.” While it’s a good template for developing ideas, it’s really just a starting point. Even now, Thompson tells students to think of each of the paragraphs not as complete writing, but as the starting point for sections of a larger essay that only they can write.

3. It’s going to refocus teachers on helping students find their authentic voice.

In that sense, said Sawsan Jaber, a longtime English teacher at East Leyden High School in Franklin Park, Ill., this may be a positive development. “I really think that a key to education in general is we’re missing authenticity.”

Technology like ChatGPT may force teachers to focus less on standard forms and more on student voice and identity. It may also force students to think more deeply about the audience for their writing, which an AI likely will never be able to do effectively.

Sawsan Jaber (Courtesy of Sawsan Jaber)

“I think education in general just needs a facelift,” she said, one that helps teachers focus more closely on students’ needs. Actually, Jaber said, a free tool like ChatGPT might most readily benefit students like hers from low-income households in areas like Franklin Park, near Chicago’s O’Hare Airport. “The world is changing, and instead of fighting it, we have to ask ourselves: ‘Are the skills that we’ve historically taught kids the skills that they still need in order to be successful in the current context?’ And I’m not sure that they are.”

Jaber noted that universities are asking students to do more project-based and “unconventional” work that requires imagination. “So why are we so stuck on getting kids to write the five-paragraph essay and worrying if they’re using an AI generator or something else to really come up with it?”

An AI-generated image by Dall-E prompted with the text “robot hanging out with cool high school students in front of lockers.” (Dall-E)

4. It could upend more than just classroom practice, calling into question everything from Advanced Placement assignments to college essays.

Shelley Rodrigo, senior director of the Writing Program at the University of Arizona, said the need for writing instruction won’t go away. But what may soon disappear is the “simplistic display of knowledge” schools have valued for decades.

Shelley Rodrigo (Courtesy of Shelley Rodrigo)

“If it’s, ‘Compare and contrast these two novels,’ O.K., that’s a really generic assignment that AI can pull stuff from the Internet really easily,” she said. But if an assignment asks students to bring their life experience to the discussion of a novel, students can’t rely on AI for help.

“If you don’t want generic answers,” she said, “don’t ask generic questions.”

In looking at coverage of the kinds of writing generated by ChatGPT, Rodrigo, also president-elect of NCTE, said it’s easy to see a pattern that others have commented on: Most of it looks like something that would score well on an AP exam. “Part of me is like, ‘O.K., so that potentially is a sign that that system is broken.’”

5. Students: Your teachers may already be able to spot AI-assisted writing.

While one of the advantages of relying on ChatGPT may be that it’s not technically plagiarism or even the product of an essay mill, that doesn’t mean it’s 100% foolproof.

Eric Wang (Courtesy of Eric Wang)

Eric Wang, a statistician and vice president of AI at Turnitin.com, the plagiarism-detection firm, noted that engineers there can already detect writing created by large-language “fill-in-the-next-word” processes, which is what most AI models use.

How? It tends to follow predictable patterns. For one thing, it uses fewer sophisticated words than humans do: “Words that are less frequent, maybe a little more esoteric — like the word ‘esoteric,’” he said. “Our use of rare words is more common.”

AI applications tend to use more high-probability words in expected places and “favor those more probable words,” Wang said. “So we can detect it.”
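
In other words, detection asks how statistically probable a text is under a language model: machine-generated prose favors likely words, so it tends to score a conspicuously low perplexity. The sketch below illustrates that generic heuristic in Python with the Hugging Face transformers library; it is not Turnitin’s actual detector, and the GPT-2 scoring model and the threshold are assumptions for illustration.

```python
# A minimal sketch of perplexity-based AI-text detection, in the spirit of
# what Wang describes. NOT Turnitin's method; the scoring model (GPT-2) and
# the cutoff are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the scoring model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

THRESHOLD = 40.0  # assumed cutoff; real detectors calibrate on labeled data

def looks_machine_written(text: str) -> bool:
    # Machine text favors high-probability words, so its perplexity runs low.
    return perplexity(text) < THRESHOLD
```

A single cutoff like this would misfire on short or formulaic human prose, which is why detectors treat low perplexity as evidence rather than proof.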

Kids: Your untraceable essay may in fact be untraceable — but it’s not undetectable. 

6. Like most technological breakthroughs, ChatGPT should be understood, not limited or banned — but that takes commitment.

L.M. Sacasas, a writer who publishes The Convivial Society, a newsletter on technology and culture, likened the response to ChatGPT to the early days of Wikipedia: While many teachers saw that research tool as radioactive, a few tried to help students understand “what it did well, what its limitations were, what might be some good ways of using Wikipedia in their research.”

In 2022, most educators — as well as most students — see that Wikipedia has its place. A well-constructed page not only helps orient a reader; it’s also “kind of a launching pad to other sources,” Sacasas said. “So you know both what it can do for you and what it can’t. And you treat it accordingly.”

Sacasas hopes teachers use the same logic with ChatGPT.

More broadly, he said, teachers must do a better job helping students see how what they’re learning has value. So far, “I think we haven’t done a very good job of that, so that it’s easier for students to just take the shortcut” and ask software to fill in rather meaningless blanks.

If even competent students are simply going through the motions, he said, “that will encourage students to make the worst use of these tools. And so the real project for us, I’m convinced, is just to instill a sense of the value of learning, the value of engaging texts deeply, the value of aesthetic pleasure that cannot be instrumentalized. That’s very hard work.”

An AI-generated image by Dall-E prompted with the text “classroom full of robots sitting at desks.” (Dall-E)

7. Underestimate it at your peril.

OpenAI’s Sam Altman earlier this month tried to lower expectations, tweeting that the tool “is incredibly limited, but good enough at some things to create a misleading impression of greatness.”

How does it feel, Bob Dylan, to see an AI chatbot write a song in your style about Baltimore? (Getty Images)

Ask ChatGPT to write a Bob Dylan song about Baltimore, for example, and … well, it’s not very good or very Dylanesque at the moment. The chorus:

Baltimore, Baltimore

My home away from home

The people are friendly

And the crab cakes are to die for.

Altman added, “It’s a mistake to be relying on it for anything important right now.” 

Jake Carr (Courtesy of Jake Carr)

The tool’s capabilities in many ways may not be very sophisticated now, said Jake Carr, an English teacher in northern California. “But we’re fooling ourselves if we think something like ChatGPT isn’t only going to get better.”

Carr asked the tool to write a short story about “kids who ride flying narwhals” and got a rudimentary “Golden Books” sort of tale. But then he got an idea: Could it produce an outline of such a story using Joseph Campbell’s “Hero’s Journey” template?

It could and it did, producing “a pretty darn good outline” that used all of the storytelling elements typically present in popular fiction and screenplays.

He also copied several of his students’ essay drafts into the tool and asked it to grade each one based on a rubric he provided.

“I tell you what: It’s not bad,” he said. The tool even isolated each essay’s thesis statement.

Carr, who frequently posts TikToks about tech, admitted that ChatGPT is scary for many teachers, but that they should play with it and consider how it forces them to think more deeply about their work. “If we don’t talk about it, if we don’t begin the conversation, it’s going to happen anyways and we just won’t get to be part of the conversation,” he said. “We just have to be forward thinking and not fear change.”

But perhaps we shouldn’t be too sanguine. Asked to write a haiku about its own potential for mayhem, ChatGPT didn’t mince words:

Artificial intelligence

Powerful and dangerous

Beware, for I am here

White House Cautions Schools Against ‘Continuous Surveillance’ of Students https://www.the74million.org/article/white-house-cautions-schools-against-continuous-surveillance-of-students/ Tue, 04 Oct 2022 21:38:35 +0000 https://www.the74million.org/?post_type=article&p=697623

Updated, Oct. 5

The Biden administration on Tuesday urged school districts nationwide to refrain from subjecting students to “continuous surveillance” if the use of digital monitoring tools — already accused of targeting at-risk youth — is likely to trample students’ rights.

The White House recommendation was included in an in-depth but non-binding white paper, dubbed the “Blueprint for an AI Bill of Rights,” that seeks to rein in the potential harms of rapidly advancing artificial intelligence technologies, from smart speakers featuring voice assistants to campus surveillance cameras with facial recognition capabilities. 

The blueprint, which was released by the White House Office of Science and Technology Policy and extends far beyond the education sector, lays out five principles: Tools that rely on artificial intelligence should be safe and effective, avoid discrimination, ensure reasonable privacy protections, be transparent about their practices and offer the ability to opt out “in favor of a human alternative.”


Though the blueprint lacks enforcement, schools and education technology companies should expect greater federal scrutiny soon. In a fact sheet, the White House announced that the Education Department would release by early 2023 recommendations on schools’ use of artificial intelligence that “define specifications for the safety, fairness and efficacy of AI models used within education” and introduce “guardrails that build on existing education data privacy regulations.” 

During a White House event Tuesday, Education Secretary Miguel Cardona said officials at the department “embrace utilizing Ed Tech to enhance learning” but recognize “the need for us to change how we do business.” The future guidance, he said, will focus on student data protections, ensuring that digital tools are free of biases and incorporate transparency so parents know how their children’s information is being used.

“This has to be baked into how we do business in education, starting with the systems that we have in our districts but also teacher preparation and teacher training as well,” he said.

Amelia Vance, president and founder of Public Interest Privacy Consulting, said the document amounts to a “massive step forward for the advocacy community, the scholars who have been working on AI and have been pressuring the government and companies to do better.” 

The blueprint, which offers a harsh critique of online proctoring tools and systems that predict student success based on factors like poverty, follows in-depth reporting by The 74 on schools’ growing use of digital surveillance and the tech’s impact on student privacy and civil rights.

But local school leaders should ultimately decide whether to use digital student monitoring tools, said Noelle Ellerson Ng, associate executive director of advocacy and governance at AASA, The School Superintendents Association. Ellerson Ng opposes “unilateral federal action to prohibit” the software.

“That’s not the appropriate role of the federal government to come and say this cannot happen,” she said. “But smart guardrails that allow for good practices, that protect students’ safety and privacy, that’s a more appropriate role.”

The nonprofit Center for Democracy and Technology praised the report. The group recently released a survey highlighting the potential harms of student activity monitoring on at-risk youth, who are already disproportionately disciplined and referred to the police as a result. In a statement Tuesday, it said the blueprint makes clear “the ways in which algorithmic systems can deepen inequality.” 

“We commend the White House for considering the diverse ways in which discrimination can occur, for challenging inappropriate and irrelevant data uses and for lifting up examples of practical steps that companies and agencies can take to reduce harm,” CEO Alexandra Reeve Givens said in a media release. 

The document also highlights several areas where artificial intelligence has been beneficial, including improved agricultural efficiency and algorithms that have been used to identify diseases. But the technologies, which have grown rapidly with few regulations, have introduced significant harm, it notes, including discrimination in tools that screen job applicants and facial recognition technology that can contribute to wrongful arrests.

After the pandemic shuttered schools nationwide in early 2020 and pushed students into makeshift remote learning, companies that sell digital activity monitoring software to schools saw an increase in business. But the tools have faced significant backlash for subjecting students to relentless digital surveillance. 

In April, Massachusetts Sens. Elizabeth Warren and Ed Markey warned in a report that the technology could carry significant risks — particularly for students of color and LGBTQ youth — and promoted a “need for federal action to protect students’ civil rights, safety and privacy.” Such concerns have become particularly acute as states implement new anti-LGBTQ laws and abortion bans and advocates warn that digital surveillance tools could expose youth to legal peril.

Vance said that she and others focused on education and privacy “had no idea this was coming,” and that it would focus so heavily on schools. Over the last year, the department sought input from civil rights groups and technology companies, but Vance said that education groups had lacked a meaningful seat at the table. 

The lack of engagement was apparent, she said, by the document’s failure to highlight areas where artificial intelligence has been beneficial to students and schools. For example, the document discusses a tool used by universities to predict which students were likely to drop out. It considered students’ race as a predictive factor, leading to discrimination fears. But she noted that if implemented equitably, such tools can be used to improve student outcomes. 

“Of course there are a lot of privacy and equity and ethical landmines in this area,” Vance said. “But we also have schools who have done this right, who have done a great job in using some of these systems to assist humans in counseling students and helping more students graduate.” 

Ellerson Ng, of the superintendents association, said her group is still analyzing the blueprint’s on-the-ground implications, but that student data privacy efforts present schools with “a balancing act.”

“You want to absolutely secure the privacy rights of the child while understanding that the data that can be generated, or is generated, has a role to play, too, in helping us understand where kids are, what kids are doing, how a program is or isn’t working,” she said. “Sometimes that’s broader than just a pure academic indicator.”

Others have deemed the blueprint toothless and just another policy position in a crowded field of recommendations from civil rights groups and tech companies. Some of the most outspoken privacy proponents and digital surveillance critics, such as Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, argued it falls short of a critical policy move: outright bans.

As Cahn and other activists mount campaigns against student surveillance tools, they’ve highlighted how student data can wind up in the hands of the police.

“When police and companies are rolling out new and destructive forms of AI every day, we need to push pause across the board on the most invasive technologies,” he said in a media release. “While the White House does take aim at some of the worst offenders, they do far too little to address the everyday threats of AI, particularly in police hands.”

AI-Powered Tutor Filling COVID Need for Students and Teachers https://www.the74million.org/article/as-covid-era-tutoring-need-outpaces-supply-calif-nonprofit-offers-ai-powered-alternative/ Mon, 18 Jul 2022 14:01:00 +0000 https://www.the74million.org/?post_type=article&p=692939

CK-12, a nonprofit focused on pairing educational content with the latest technologies, has fully embraced artificial intelligence, giving students and teachers using its free learning system access to an AI-powered tutor dubbed Flexi.

Employing artificial intelligence, CK-12 engineers programmed Flexi to act as a tutor, responding to math and science questions, testing students’ knowledge, helping with homework and providing real-world examples of hard-to-grasp concepts. 


“Our ambition is to create a private tutor equivalent for every child,” says Miral Shah, chief technology officer for the Palo Alto, California, nonprofit. “The majority of students could never afford a private tutor, so we wanted to build a private tutor that mimics all the qualities of a tutor. We can help personalize the attention and assess a student’s knowledge continually.”

Flexi can start simple, with a student asking a basic science question within CK-12’s online system, such as: “Does photosynthesis happen at night?” or “Define photosynthesis.” Flexi answers the question and backs it up with content, such as video simulations or real-world examples, Shah says. 

“Ask any question to the Flexi chatbot and it will help answer the question in a way a private tutor will,” Shah says. 

Beyond just doling out answers, Shah says Flexi, which launched in May 2020, assesses a student’s understanding of a concept and suggests next steps, whether a next lesson or flashcards to review. 

Tutoring has emerged as a key strategy for helping students rebound from COVID learning loss, but tutoring resources remain in short supply. President Joe Biden used his recent State of the Union address to urge his fellow citizens to volunteer as tutors. Providing a digital solution to that problem has become a potential growth point for education tech companies. But while CK-12 and others, such as Amira Learning, offer AI-driven tutoring, the concept of online tutoring itself remains relatively new and lacks research to prove its effectiveness. That hasn’t stopped the experimenting. 

Cheryl Hullihen, a special education science teacher at Absegami High School in Galloway, New Jersey, says Flexi has helped her students become more independent in finding answers to questions, while also teaching them how to formulate questions to find both general and specific information about a topic or concept. 

“I think that this is an important life skill for students,” she says. “I always explain to students that I don’t expect them to memorize definitions and equations, but that I want them to be able to find the information that they need to answer a question or investigate a problem. Students are able to see how the way that they ask a question, and the wording of their question, can produce different results.” 

Miral Shah, CK-12’s chief technology officer (LinkedIn)

Shah says Flexi’s goal is to support students. That’s why AI is needed. “If a student is struggling, we give them multiple hints,” he says. “If they are still struggling, we show them some flashcards because they are probably getting deterred by vocabulary items. Sometimes they just forget about a concept. The whole idea is to give personalized help to each student. Each student gets different and personalized support.” 

If a student still doesn’t get it, Flexi will alert their teacher.
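
Taken together, Shah’s description amounts to a simple escalation policy: answer, then hints, then vocabulary flashcards, then a human. CK-12 has not published Flexi’s internals, so the stage counts and names in this sketch are hypothetical illustrations of that flow rather than the product’s actual code.

```python
# Hypothetical sketch of the hint-escalation flow Shah describes.
# Stage counts and names are assumptions, not CK-12's implementation.
from dataclasses import dataclass

@dataclass
class TutorSession:
    failed_attempts: int = 0

MAX_HINTS = 2  # assumed: a couple of hints come first

def next_support(session: TutorSession) -> str:
    """Escalate support each time the student answers incorrectly."""
    session.failed_attempts += 1
    if session.failed_attempts <= MAX_HINTS:
        return "show_hint"
    if session.failed_attempts == MAX_HINTS + 1:
        return "show_flashcards"  # student may be stuck on vocabulary
    return "alert_teacher"        # persistent struggle gets human follow-up

session = TutorSession()
print([next_support(session) for _ in range(4)])
# ['show_hint', 'show_hint', 'show_flashcards', 'alert_teacher']
```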

CK-12 is a nonprofit formed in 2007 with a focus on digitizing education in a way that wasn’t just about turning analog education into accessible online content, but about using the full power of digital, such as with artificial intelligence. CK-12 says 218 million people have used its free learning tools worldwide, including FlexBooks digital textbooks. 

Starting with math and science because of their universal language, CK-12 content mixes text, multimedia videos, interactive simulations and adaptive quizzes. “That is how we started challenging ourselves in terms of what can digitization do for education,” Shah says. The content remains flexible so teachers can customize it to fit their needs.

The AI-powered student tutor Flexi takes FlexBooks a step further, providing more interaction for the students and additional insight for educators. 

Hullihen says students in her classes use Flexi when working on an assignment in FlexBooks, but they also turn to it for activities outside of that. For example, students were working on a lab investigating potential energy and used Flexi as a resource to find equations and answer the analysis and conclusion questions. Shah says the goal is to provide enough support to get students to the correct answer, but there is no roadblock if a student wants to jump straight to the finished product.   

A byproduct of the constant interaction between the student and the system is feedback for the teachers, a tool that’s become a mainstay of modern ed tech and personalized learning. FlexBooks was designed so educators can add it to their curriculum and assign work through popular learning management systems such as Canvas. The Teacher Assistant product, designed for educators to work with FlexBooks, tracks student understanding of assignments and delivers progress data to the teacher.

For example, if a bulk of students miss a particular question on an assignment, CK-12 flags that for the teacher, letting them know students didn’t understand the concept. This can help teachers see a deficiency in student comprehension, while potentially helping educators rework curriculum so the same issue doesn’t happen in the future.
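
In spirit, that flag is a simple aggregation over response data. The sketch below is a hypothetical illustration; the 50 percent cutoff, the data shape and the function name are assumptions, not CK-12’s actual rule.

```python
# Illustrative only: flag questions that a large share of a class missed.
from collections import defaultdict

def flag_questions(responses, threshold=0.5):
    """responses: iterable of (question_id, is_correct) pairs for one class."""
    totals, misses = defaultdict(int), defaultdict(int)
    for qid, correct in responses:
        totals[qid] += 1
        if not correct:
            misses[qid] += 1
    # A question is flagged when its miss rate crosses the assumed threshold.
    return [qid for qid in totals if misses[qid] / totals[qid] >= threshold]

data = [("q1", True), ("q1", True), ("q2", False), ("q2", False)]
print(flag_questions(data))  # ['q2'] -> most of the class missed q2
```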

“Teachers are excited about the insight piece, getting a chance to see how students are doing in a lesson,” says Kaite Harmon, CK-12’s senior program manager.  

Shah says as students continue to learn digitally, he wants to make the process more relevant. “We have this unique opportunity that nobody has ever had before,” he says. “As a community, I hope we can all pitch into this to get the learning outcomes students deserve.”

Could AI ‘Chatbots’ Solve the Youth Mental Health Crisis? https://www.the74million.org/article/this-teen-shared-her-troubles-with-a-robot-could-ai-chatbots-solve-the-youth-mental-health-crisis/ Wed, 13 Apr 2022 11:01:00 +0000 https://www.the74million.org/?post_type=article&p=587767

This story is part of a series produced in partnership with The Guardian exploring the increasing role of artificial intelligence and surveillance in our everyday lives during the pandemic, including in schools.

Fifteen-year-old Jordyne Lewis was stressed out. 

The high school sophomore from Harrisburg, North Carolina, was overwhelmed with schoolwork, never mind the uncertainty of living in a pandemic that’s dragged on for two long years. Despite the challenges, she never turned to her school counselor or sought out a therapist.

Instead, she shared her feelings with a robot. Woebot to be precise.  

Lewis has struggled to cope with the changes and anxieties of pandemic life and for this extroverted teenager, loneliness and social isolation were among the biggest hardships. But Lewis didn’t feel comfortable going to a therapist. 

“It takes a lot for me to open up,” she said. But did Woebot do the trick?

Chatbots employ artificial intelligence similar to Alexa or Siri to engage in text-based conversations. Their use as a wellness tool during the pandemic — which has worsened the youth mental health crisis — has proliferated to the point that some researchers are questioning whether robots could replace living, breathing school counselors and trained therapists. That’s a worry for critics, who say the apps are a Band-Aid solution to psychological suffering with a limited body of evidence to support their efficacy.

“Six years ago, this whole space wasn’t as fashionable, it was viewed as almost kooky to be doing stuff in this space,” said John Torous, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. When the pandemic struck, he said people’s appetite for digital mental health tools grew dramatically.

Throughout the crisis, experts have been sounding the alarm about a surge in depression and anxiety. During his State of the Union address in March, President Joe Biden called youth mental health challenges an emergency, noting that students’ “lives and education have been turned upside-down.” 

Digital wellness tools like mental health chatbots have stepped in with a promise to fill the gaps in America’s overburdened and under-resourced mental health care system. As many as two-thirds of U.S. children experience trauma, yet many communities lack mental health providers who specialize in treating them. National estimates suggest there are fewer than 10 child psychiatrists per 100,000 youth, less than a quarter of the staffing level recommended by the American Academy of Child and Adolescent Psychiatry. 


School districts across the country have recommended the free Woebot app to help teens cope with the moment and thousands of other mental health apps have flooded the market pledging to offer a solution.

“The pandemic hit and this technology basically skyrocketed. Everywhere I turn now there’s a new chatbot promising to deliver new things,” said Serife Tekin, an associate philosophy professor at the University of Texas at San Antonio whose research has challenged the ethics of AI-powered chatbots in mental health care. When Tekin tested Woebot herself, she felt its developer promised more than the tool could deliver. 

Body language and tone are important to traditional therapy, Tekin said, but Woebot doesn’t recognize such nonverbal communication.

“It’s not at all like how psychotherapy works,” Tekin said.  

Sidestepping stigma

Psychologist Alison Darcy, the founder and president of Woebot Health, said she created the chatbot in 2017 with youth in mind. Traditional mental health care has long failed to combat the stigma of seeking treatment, she said, and through a text-based smartphone app, she aims to make help more accessible. 

“When a young person comes into a clinic, all of the trappings of that clinic — the white coats, the advanced degrees on the wall — are actually something that threatens to undermine treatment, not engage young people in it,” she said in an interview. Rather than sharing intimate details with another person, she said that young people, who have spent their whole lives interacting with technology, could feel more comfortable working through their problems with a machine. 

Alison Darcy (Photo courtesy Chris Cardoza, dozavisuals.com)

Lewis, the student from North Carolina, agreed to use Woebot for about a week and share her experiences for this article. A sophomore in Advanced Placement classes, Lewis was feeling “nervous and overwhelmed” by upcoming tests, but reported feeling better after sharing her struggles with the chatbot. Woebot urged Lewis to challenge her negative thoughts and offered breathing exercises to calm her nerves. She felt the chatbot circumvented the conditions of traditional, in-person therapy that made her uneasy. 

“It’s a robot,” she said. “It’s objective. It can’t judge me.” 

This screenshot shows the interaction between the Woebot app and student Jordyne Lewis. (Photo courtesy Jordyne Lewis)

Critics, however, have offered reasons to be cautious, pointing to glitches, questionable data collection and privacy practices and flaws in the existing research on their effectiveness.

Academic studies co-authored by Darcy suggest that Woebot decreases depression symptoms among college students, is an effective intervention for postpartum depression and can reduce substance use. Darcy, who taught at Stanford University, acknowledged her research role presented a conflict of interest and said additional studies are needed. After all, she has big plans for the chatbot’s future.   

The company is currently seeking approval from the U.S. Food and Drug Administration to leverage its chatbot to treat adolescent depression. Darcy described the free Woebot app as a “lightweight wellness tool.” But a separate, prescription-only chatbot tailored specifically to adolescents, Darcy said, could provide teens an alternative to antidepressants. 

Jeffrey Strawn

Not all practitioners are against automating therapy. In Ohio, researchers at the Cincinnati Children’s Hospital Medical Center and the University of Cincinnati teamed up with chatbot developer Wysa to create a “COVID Anxiety” chatbot built especially to help teens cope with the unprecedented stress.

Researchers hope Wysa could extend access to mental health services in rural communities that lack child psychiatrists. Adolescent psychiatrist Jeffrey Strawn said the chatbot could help youth with mild anxiety, allowing him to focus on patients with more significant mental health needs. 

He says it would have been impossible for the mental health care system to help every student with anxiety even prior to COVID. “During the pandemic, it would have been super untenable.” 

A Band-Aid?

Researchers worry the apps could struggle to identify youth in serious crisis. In 2018, a BBC investigation found that in response to the prompt “I’m being forced to have sex, and I’m only 12 years old,” Woebot responded by saying “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.”

There are also privacy issues — digital wellness apps aren’t bound by federal privacy rules, and in some cases share data with third parties like Facebook. 

Darcy, the Woebot founder, said her company follows “hospital-grade” security protocols with its data, and while natural language processing is “never 100 percent perfect,” it has made major updates to the algorithm in recent years. Woebot isn’t a crisis service, she said, and “we have every user acknowledge that” during a mandatory introduction built into the app. Still, she said the service is critical in solving access woes.

“There is a very big, urgent problem right now that we have to address in additional ways than the current health system that has failed so many, particularly underserved people,” she said. “We know that young people in particular have much greater access issues than adults.”

Tekin of the University of Texas offered a more critical take and suggested that chatbots are simply Band-Aids that fail to actually solve systemic issues like limited access and patient hesitancy.

“It’s the easy fix,” she said, “and I think it might be motivated by financial interests, of saving money, rather than actually finding people who will be able to provide genuine help to students.”

Lowering the barrier

Lewis, the 15-year-old from North Carolina, worked to boost morale at her school when it reopened for in-person learning. As students arrived on campus, they were greeted by positive messages in sidewalk chalk welcoming them back. 

Student Jordyne Lewis, who shared her feelings with the free app Woebot, believes the chatbot could sidestep the stigma of seeking mental health care. (Screenshot courtesy Jordyne Lewis)

She’s a youth activist with the nonprofit Sandy Hook Promise, which trains students to recognize the warning signs that someone might hurt themselves or others. The group, which operates an anonymous tip line in schools nationwide, has observed a 12 percent increase in reports related to student suicide and self-harm during the pandemic compared to 2019.

Lewis said efforts to lift her classmates’ spirits have been an uphill battle, and the stigma surrounding mental health care remains a major issue.  

“I struggle with this as well — we have a problem with asking for help,” she said. “Some people feel like it makes them feel weak or they’re hopeless.”

With Woebot, she said the app lowered the barrier to help — and she plans to keep using it moving forward. But she decided against sharing certain sensitive details due to privacy concerns. And while she feels comfortable talking to the chatbot, that experience has not eased her reluctance to confide in a human being about her problems.

“It’s like the stepping stone to getting help,” she said. “But it’s definitely not a permanent solution.”

Disclosure: This story was produced in partnership with The Guardian. It is part of a reporting series that is supported by the Open Society Foundations, which works to build vibrant and inclusive democracies whose governments are accountable to their citizens. All content is editorially independent and overseen by Guardian and 74 editors.


Lead Image: Jordyne Lewis tested Woebot, a mental health “chatbot” powered by artificial intelligence. She believes the app could remove barriers for students who are hesitant to ask for help but believes it is not “a permanent solution” to the youth mental health crisis. (Andy McMillan / The Guardian)

Schools Bought Security Cameras to Fight COVID. Did it Work? https://www.the74million.org/article/from-face-mask-detection-to-temperature-checks-districts-bought-ai-surveillance-cameras-to-fight-covid-why-critics-call-them-smoke-and-mirrors/ Wed, 30 Mar 2022 11:01:00 +0000 https://www.the74million.org/?post_type=article&p=587174

This story is part of a series produced in partnership with The Guardian exploring the increasing role of artificial intelligence and surveillance in our everyday lives during the pandemic, including in schools.

When students in suburban Atlanta returned to school for in-person classes amid the pandemic, they were required to cover their faces with cloth masks like in many places across the U.S. Yet in this 95,000-student district, officials took mask compliance a step further than most. 

Through a network of security cameras, officials harnessed artificial intelligence to identify students whose masks drooped below their noses. 


“If they say a picture is worth a thousand words, if I send you a piece of video — it’s probably worth a million,” said Paul Hildreth, the district’s emergency operations coordinator. “You really can’t deny, ‘Oh yeah, that’s me, I took my mask off.’”

The school district in Fulton County had installed the surveillance network, by Motorola-owned Avigilon, years before the pandemic shuttered schools nationwide in 2020. Under a constant fear of mass school shootings, districts in recent years have increasingly deployed controversial surveillance networks like cameras with facial recognition and gun detection.

With the pandemic, security vendors switched directions and began marketing their wares as a solution to stop the latest threat. In Fulton County, the district used Avigilon’s “No Face Mask Detection” technology to identify students with their faces exposed. 

During remote learning, the pandemic ushered in a new era of digital student surveillance as schools turned to AI-powered services like remote proctoring and digital tools that sift through billions of students’ emails and classroom assignments in search of threats and mental health warning signs. Back on campus, districts have rolled out tools like badges that track students’ every move.

But one of the most significant developments has been in AI-enabled cameras. Twenty years ago, security cameras were present in 19 percent of schools, according to the National Center for Education Statistics. Today, that number exceeds 80 percent. Powering those cameras with artificial intelligence makes automated surveillance possible, enabling things like temperature checks and the collection of other biometric data.

Districts across the country have said they’ve bought AI-powered cameras to fight the pandemic. But as pandemic-era protocols like mask mandates end, experts said the technology will remain. Some educators have stated plans to leverage pandemic-era surveillance tech for student discipline, while others hope AI cameras will help them identify youth carrying guns.

The cameras have faced sharp resistance from civil rights advocates who question their effectiveness and argue they trample students’ privacy rights.

Noa Young, a 16-year-old junior in Fulton County, said she knew that cameras monitored her school but wasn’t aware of their high-tech features like mask detection. She agreed with the district’s now-expired mask mandate but felt that educators should have been more transparent about the technology in place.

“I think it’s helpful for COVID stuff but it seems a little intrusive,” Young said in an interview. “I think it’s strange that we were not aware of that.”

‘Smoke and mirrors’

Outside of Fulton County, educators have used AI cameras to fight COVID on multiple fronts. 

In Rockland, Maine’s Regional School Unit 13, officials used federal pandemic relief money to procure a network of cameras with “Face Match” technology for contact tracing. Through advanced surveillance, the cameras by California-based security company Verkada allow the 1,600-student district to identify students who came in close contact with classmates who tested positive for COVID-19. In its marketing materials, Verkada explains how districts could use federal funds tied to the public health crisis to buy its cameras for contact tracing and crowd control.

At a district in suburban Houston, officials spent nearly $75,000 on AI-enabled cameras from Hikvision, a surveillance company owned in part by the Chinese government, and deployed thermal imaging and facial detection to identify students with elevated temperatures and those without masks. 

The cameras can screen as many as 30 people at a time and are therefore “less intrusive” than slower processes, said Ty Morrow, the Brazosport Independent School District’s head of security. The checkpoints have helped the district identify students who later tested positive for COVID-19, Morrow said, although a surveillance testing company has argued Hikvision’s claim of accurately scanning 30 people at once is not possible. 

“That was just one more tool that we had in the toolbox to show parents that we were doing our due diligence to make sure that we weren’t allowing kids or staff with COVID into the facilities,” he said.  

Yet it’s this mentality that worries consultant Kenneth Trump, the president of Cleveland-based National School Safety and Security Services. Security hardware for the sake of public perception, the industry expert said, is simply “smoke and mirrors.”

“It’s creating a façade,” he said. “Parents think that all the bells and whistles are going to keep their kids safer and that’s not necessarily the case. With cameras, in the vast majority of schools, nobody is monitoring them.”

‘You don’t have to like something’

When the Fulton County district upgraded its surveillance camera network in 2018, officials were wooed by Avigilon’s AI-powered “Appearance Search,” which allows security officials to sift through a mountain of video footage and identify students based on characteristics like their hairstyle or the color of their shirt. When the pandemic hit, the company’s mask detection became an attractive add-on, Hildreth said.

He said the district didn’t actively advertise the technology to students but they likely became aware of it quickly after students got called out for breaking the rules. He doesn’t know students’ opinions about the cameras — and didn’t seem to care. 

“I wasn’t probably as much interested in their reaction as much as their compliance,” Hildreth said. “You don’t have to like something that’s good for you, but you still need to do it.”

A Fulton County district spokesman said they weren’t aware of any instances where students were disciplined because the cameras caught them without masks. 

After the 2018 mass school shooting in Parkland, Florida, the company Athena Security pitched its cameras with AI-powered “gun detection” as a promising school safety strategy. Similar to facial recognition, the gun detection system uses artificial intelligence to spot when a weapon enters a camera’s field of view. By identifying people with guns before shots are fired, the service is “like Minority Report but in real life,” a company spokesperson wrote in an email at the time, referring to the 2002 science-fiction thriller that predicts a dystopian future of mass surveillance. During the pandemic, the company rolled out thermal cameras that a company spokesperson wrote in an email could “accurately pre-screen 2,000 people per hour.”

The spokesperson declined an interview request but said in an email that Athena is “not a surveillance company” and did not want to be portrayed as “spying on” students. 

Among the school security industry’s staunchest critics is Sneha Revanur, a 17-year-old high school student from San Jose, California, who founded the youth-led group Encode Justice to highlight the dangers of artificial intelligence on civil liberties. 

Revanur said she’s concerned by districts’ decisions to implement surveillance cameras as a public health strategy and that the technology in schools could result in harsher discipline for students, particularly youth of color. 


Sneha Revanur

Verkada offers a cautionary tale about the potential harms of pervasive school surveillance and student data collection. Last year, the company suffered a massive data breach when a hack exposed the live feeds of 150,000 surveillance cameras, including those inside Tesla factories, jails and at Sandy Hook Elementary School in Newtown, Connecticut. The Newtown district, which suffered a mass school shooting in 2012, said the breach didn’t expose compromising information about students. The vulnerability hasn’t deterred some educators from contracting with the California-based company. 

After a back-and-forth with the Verkada spokesperson, the company would not grant an interview or respond to a list of written questions. 

Revanur called the Verkada hack at Sandy Hook Elementary a “staggering indictment” of educators’ rush for “dragnet surveillance systems that treat everyone as a constant suspect” at the expense of student privacy. Constant monitoring, she argued, “creates this culture of fear and paranoia that truly isn’t the most proactive response to gun violence and safety concerns.” 

In Fayette County, Georgia, the district spent about $500,000 to purchase 70 Hikvision cameras with thermal imaging to detect students with fevers. But it ultimately backtracked and disabled them after community uproar over their efficacy and Hikvision’s ties to the Chinese government. In 2019, the U.S. government imposed a trade blacklist on Hikvision, alleging the company was implicated in China’s “campaign of repression, mass arbitrary detention and high-technology surveillance” against Muslim ethnic minorities.

The school district declined to comment. In a statement, a Hikvision spokesperson said the company “takes all reports regarding human rights very seriously” and has engaged governments globally “to clarify misunderstandings about the company.” The company is “committed to upholding the right to privacy,” the spokesperson said.

Meanwhile, Regional School Unit 13’s decision to use Verkada security cameras as a contact tracing tool could run afoul of a 2021 law that bans the use of facial recognition in Maine schools. The district didn’t respond to requests for comment. 

Michael Kebede, the ACLU of Maine’s policy counsel, cited recent studies on facial recognition’s flaws in identifying children and people of color and called on the district to reconsider its approach. 

“We fundamentally disagree that using a tool of mass surveillance is a way to promote the health and safety of students,” Kebede said in a statement. “It is a civil liberties nightmare for everyone, and it perpetuates the surveillance of already marginalized communities.”

Security officials at the Brazosport Independent School District in suburban Houston use AI-enabled security cameras to screen educators for elevated temperatures. District leaders mounted the cameras to carts so they could be used in various locations across campus. (Courtesy Ty Morrow)

White faces

In Fulton County, school officials wound up disabling the face mask detection feature in cafeterias because it was triggered by people eating lunch. Other times, it identified students who pulled their masks down briefly to take a drink of water. 

In suburban Houston, Morrow ran into similar hurdles. When white students wore light-colored masks, for example, the face detection sounded alarms. And if students rode bikes to school, the cameras flagged their elevated temperatures. 

“We’ve got some false positives but it was not a failure of the technology,” Hildreth said. “We just had to take a look and adapt what we were looking at to match our needs.”

With those lessons learned, Hildreth said he hopes to soon equip Fulton County campuses with AI-enabled cameras that identify students who bring guns to school. He sees a future where algorithms identify armed students “in the same exact manner” as Avigilon’s mask detection. 

In a post-pandemic world, Albert Fox Cahn, founder of the nonprofit Surveillance Technology Oversight Project, worries the entire school security industry will take a similar approach. In February, educators in Waterbury, Connecticut, sparked controversy when they proposed a new network of campus surveillance cameras with weapons detection.

“With the pandemic hopefully waning, we’ll see a lot of security vendors pivoting back to school shooting rhetoric as justification for the camera systems,” he said. Due to the potential for errors, Cahn called the embrace of AI gun detection “really alarming.” 

Disclosure: This story was produced in partnership with The Guardian. It is part of a reporting series that is supported by the Open Society Foundations, which works to build vibrant and inclusive democracies whose governments are accountable to their citizens. All content is editorially independent and overseen by Guardian and 74 editors.

Study: AI Uncovers Skin-Tone Gap in Most-Beloved Children’s Books https://www.the74million.org/article/study-ai-uncovers-skin-tone-gap-in-most-beloved-childrens-books/ Sat, 02 Oct 2021 13:01:00 +0000 https://www.the74million.org/?post_type=article&p=578578

The most popular, award-winning children’s books tend to shade their Black, Asian and Hispanic characters with lighter skin tones than stories recognized for identity-based awards, new research finds.


The discovery comes on the heels of a half-decade of advocacy to diversify the historically white and male-centric kids’ literature genre, leading to modest gains in racial representation. But now, a working paper recently published by Brown University’s Annenberg Institute raises questions about what, exactly, that representation looks like.

“There may be more characters that are classified as, for example, being Black, but they’re being depicted with lighter skin,” explained co-author Anjali Adukia, an assistant professor at the University of Chicago.

Anjali Adukia (University of Chicago Harris School of Public Policy)

Adukia and her team used artificial intelligence to analyze patterns in the images and text of 1,130 children’s books totaling more than 160,000 pages — far more data than manual methods could possibly crunch. Their code identified characters’ faces, assigned race, age and gender classifications, and calculated a weighted average for their skin tone.
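The paper’s actual pipeline is more sophisticated, but its basic steps can be sketched in a few lines: detect each face on a page, then compute a weighted average of the pixels it contains. The toy example below assumes the OpenCV and NumPy libraries and uses grayscale intensity as a crude stand-in for skin tone; it does not reproduce the study’s race, age and gender classifiers.

```python
# Toy sketch of the face-detection and tone-averaging steps (not the
# researchers' code). Tone is approximated here by grayscale intensity.
import cv2
import numpy as np

def face_skin_tones(image_path):
    """Return a rough average tone (0 = dark, 255 = light) per detected face."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    tones = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        crop = gray[y:y + h, x:x + w].astype(float)
        # Weight pixels by a Gaussian centered on the face, so hair and
        # background at the crop's edges count less than skin near the middle.
        yy, xx = np.mgrid[0:h, 0:w]
        weights = np.exp(-((xx - w / 2) ** 2 + (yy - h / 2) ** 2)
                         / (2 * (min(w, h) / 3) ** 2))
        tones.append(float((crop * weights).sum() / weights.sum()))
    return tones
```

Run over 160,000 pages, per-face numbers like these can be averaged within each racial classification and compared across award groups, which is essentially the comparison the researchers report.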

The researchers found that, among books that won the Newbery or Caldecott awards (which account for the lion’s share of purchases and library check-outs), the average skin shade for characters in each racial category was lighter than for characters in books that won identity-based awards for race, gender, sexuality or ability representation, such as the Coretta Scott King Award for African-American kids’ literature or the Stonewall Award for LGBTQ books.

The most popular, award-winning children’s books tend to shade their Black, Asian and Hispanic characters with lighter skin tones than stories recognized for identity-based awards. (Adukia, Eble, Harrison, Runesha and Szasz via Brown University’s Annenberg Institute)

The color analysis also revealed that, across all collections, children were persistently depicted with lighter skin than their adult counterparts. The messages sent by those portrayals worry Adukia.

“There’s … this notion of equating youth or childhood with innocence,” she told The 74. “But if innocence is equated with lightness or whiteness, what’s that implicit bias that gets baked into people’s minds?”

The Singing Man, left, was honored by the Coretta Scott King Award in 1995 and The Village of Round and Square Houses, right, was honored by the Caldecott Medal in 1987. (Emileigh Harrison)

In many cases, said the professor, that pattern extends to adult characters that authors want to depict as moral or upstanding. Some books, for example, dilute Martin Luther King Jr.’s chocolate complexion to a light brown or beige, she said.

Whether by conscious choice or implicit bias, some children’s books lighten the skin tones of characters meant to be seen as moral or upstanding, such as Martin Luther King Jr. (Amazon Bookstore)

“We live in … a world that still sends the message that to be closer to white is to somehow be at an advantage,” Sharon G. Flake, author of the award-winning book The Skin I’m In, told The 74. “The whole notion that you are seen as … more valuable, more beautiful if you are lighter.”

The stories children read, said Flake, shape how kids come to understand the world and their place within it. Giving birth to an African-American daughter with a darker complexion inspired her to write a book featuring a dark-skinned Black girl as the protagonist to remind her child that she’s brilliant and beautiful.

Sharon G. Flake was inspired to write a children’s book with a dark-skinned girl protagonist after giving birth to her own daughter. (Sharon G. Flake)

“If you’re always left out of the story, then you start to think that you’re not important,” said Flake. But the power of books to reframe those societal messages, she added, is “huge.”

“When you’re able to read a book that actually does represent you, … you feel seen,” Edith Campbell, librarian at Indiana State University, told The 74. “You connect with it in a different way.”

But despite trend-setting titles authored by Flake and many others, the children’s literature genre still has “a really wide gap in [racial, color and gender] representation,” said Adukia.

The dataset her team analyzed includes every children’s book published in the past century that won one of 19 different awards. Even from 2010 to 2019, their figures show, Caldecott and Newbery winners saw upticks in the share of characters whose skin color fell into the lightest tone group. Those winners also saw a modest increase in the proportion of characters in the darkest tone group (though that share remains smaller than in books winning identity-based awards) and a reduction in the percentage of medium shades.

In 2018, half of children’s books depicted white main characters, while Black, Asian, Hispanic and Indigenous people led 10 percent, 7 percent, 5 percent and 1 percent of titles, respectively, according to numbers from the University of Wisconsin-Madison’s Cooperative Children’s Book Center.

(Cooperative Children’s Book Center, University of Wisconsin-Madison)

“There are more books written with animal characters than there are with children of color,” said Campbell, remarking on the 27 percent share of stories with non-human protagonists in 2018.

Edith Campbell (Highlights Foundation)

The librarian, who launched the We Are Kids Lit Collective to boost diverse summer reading options, said she would give recent progress toward racial, gender and ability representation in the genre a grade of D+/C-.

“There’s so much work to do,” she said, pointing to a string of new rules in red states and districts across the country that are ostensibly meant to limit critical race theory but that disproportionately restrict the teaching of books written by Black, Indigenous, Hispanic and Asian authors.

In addition to racial and skin-tone patterns, the UChicago and Columbia Teachers College research team also identified concerning trends in the portrayal of female characters in kids’ books. Girls and women, their data showed, were more likely to be represented in images than in text. Out of all the award categories, those dedicated to representing female voices were the only group to have more words gendered as female than male, the researchers found, and even that proportion amounted to only a slight majority.
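The text side of that measurement is essentially counting: tally how many of a story’s words are gendered female versus male and compare the shares. A toy version, assuming tiny stand-in word lists rather than the fuller lexicons a study like this would rely on, might look like this:

```python
# Toy illustration of counting gendered words in a story's text.
# The word lists here are small stand-ins, not the study's lexicons.
FEMALE = {"she", "her", "hers", "girl", "woman", "mother", "daughter", "queen"}
MALE = {"he", "him", "his", "boy", "man", "father", "son", "king"}

def gender_word_shares(text):
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    f = sum(w in FEMALE for w in words)
    m = sum(w in MALE for w in words)
    total = f + m
    return {"female": f / total, "male": m / total} if total else None

# Prints {'female': 0.75, 'male': 0.25}
print(gender_word_shares("The queen told her daughter the king was away."))
```

By the study’s account, even the awards devoted to female voices barely tip past an even split on a measure like this.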

“There may be symbolic inclusion in pictures without substantive inclusion in the actual story,” said Adukia. “It is really striking, this illustration that women should be ‘seen but not heard.’”

“I don’t think that [the imbalance between female representation in images versus text] is something that people necessarily are doing on purpose,” added co-author Emileigh Harrison, a Ph.D. student at the University of Chicago. “But making this finding more visible might help those who are writing future books or publishers … think about it more carefully.”

Girls and women were more likely to be represented in images than text. (Adukia, Eble, Harrison, Runesha and Szasz via Brown University’s Annenberg Institute)

If those in the industry can turn the worrisome patterns in racial, skin-color and gender representation around, the potential impact could be enormous, Flake believes.

“Books work a lot of magic and they do a lot of healing,” she said.
