
Concerns About AI

Overview

AI promises to be transformative for many domains (check out the use cases that What the Tech? participants dreamed up!), but it is also a source of anxiety.

We identified a variety of concerns, as well as some nuance, in how the general public perceives AI.


These concerns were identified via:

  • Memos to the City of Boston’s Chief Information Officer by undergraduate students based on interviews with stakeholder groups;

  • Limitations to use cases articulated by high school students;

  • Testimony by Community Advocacy Fellows to members of Boston City Council;

  • A survey of Boston-area professionals conducted by high school interns. 

Digital Equity and Education

Bottom Line:

Everyone is affected by AI, but not everyone knows what it is. All WTT? participants were alarmed at how few people understand AI even as it pervades all parts of society. They recognized that education for all ages is urgently needed before communities can meaningfully deliberate any of the other concerns listed here.

From the Research:

  • One Community Advocacy Fellow highlighted the need for public education on AI to City Councilors, stating that “I had never heard of AI before this program. … How am I supposed to parent my teenage children if this is everywhere and I don’t know what it is and how it works?”

  • Similarly, one high school student commented in a weekly reflection that, “…While there are many opinions on the controversial upbringing of AI, a general pattern I noticed was that not everyone is familiar with AI, and everyone is at different levels of understanding on the topic…”

  • Undergraduate interviews with representatives of cultural groups similarly noted that “[AI] can only be beneficial to those who have the means to use it. … We found that many, regardless of cultural background, are not aware of what AI truly is or what it can be used for.” 

Accuracy and Reliability

Bottom Line:

What happens when AI generates incorrect information? False assertions and "hallucinations" were among the greatest and most consistent concerns.

From the Research:

  • Undergraduate memos referenced the persistent “need for human oversight and verification” (College-Focused) because “as of now, [AI] cannot serve as a replacement for the diagnosis of patients, or the accurate selection of drug targets,” or other applications.

  • Accuracy was both described as a limitation for the use cases proposed by high school students, especially in medical contexts, and ranked as the highest concern for the use of AI by Boston-area professionals (avg. 4.75 on a 5-point scale).

  • Sometimes AI can be too accurate, though, by remaining faithful to historical patterns. The same memos noted that, “Many [college students] indicated that they felt like the algorithm [on social media platforms] was over-tuned to their preferences, resulting in homogenous feeds that reduce spontaneity and discovery.” Likewise, high school students were concerned that AI in the arts could constrain creativity by limiting artists to established designs and techniques.

Bias

Bottom Line:

AI algorithms are prone to unequal treatment of racial minorities and disadvantaged populations. The potential for algorithms to be skewed or non-inclusive in how they interact with and treat these groups is a prevalent issue.

From the Research:

  • Bias was the second-highest concern among Boston-area professionals (52% of respondents indicated it was one of their top two concerns).

  • Representatives of cultural groups interviewed by WTT? college students cited “racial bias…due to a non-diverse training set. This is detrimental to technological inclusivity as it could prevent certain groups of people from taking full advantage of what AI has to offer.” 

  • High school participants in WTT? also described bias as a major concern for their use cases.

Job Displacement

Bottom Line:

AI could replace many skilled jobs, with some professions more vulnerable than others. Participants feared that as AI becomes more powerful, more workers will be at risk of displacement, a fear that spans a wide range of jobs and skill sets.

From the Research:

  • WTT?’s high school participants were especially concerned about job displacement, seeing the new technology as replacing professionals in music and the arts, transportation, and more.

  • In contrast, college students interviewed by undergraduate WTT? participants “[were] not concerned about being replaced by [AI]” but instead anticipated that many jobs would shift to “managing AI” because the technology lacks “a distinguishable ‘human touch.’” Some even expressed “optimism for growth in industries and new job positions.”

  • Boston-area professionals responding to the WTT? survey saw job displacement as the least pressing concern among those listed here (an average of 2.24 on a 5-point scale). 

Privacy and Security

Bottom Line:

There are deep misgivings about how AI collects, uses, and protects data about each user and interaction.

From the Research:

  • Boston-area professionals reported that this was a major issue with AI systems (avg. 4.63 on a 5-point scale).

  • Undergraduate memos noted “data privacy and HIPAA violations” as major issues and urged Boston’s CIO to “prioritize privacy and personal data protection, giving individuals control over their data usage.”

Environmental Risks

Bottom Line:

For all the amazing things AI can and may one day do, one fact remains: it requires a lot of water to work. As droughts increase worldwide due to the climate crisis, AI's impact on global water sources is becoming more and more of a concern. If AI is here to stay, emphasis on responsible usage must be part of the discussion.

From the Research:

  • Focus group respondents who participated in WTT? recalled environmental issues being raised during group discussions.

  • Undergraduates were concerned that AI tools consume large amounts of energy and resources. They warned that AI should not be overused to the point of serious environmental harm.

Contact Us

Boston Area Research Initiative

Northeastern University
1135 Tremont St. 

Boston, MA 02120

Tech Goes Home

131 Dartmouth St

3rd Floor

Boston, MA 02116

© 2025 by Boston Area Research Initiative.