Press "Enter" to skip to content

AI Development in the Age of Crisis

Whether the tone is for, against, or neutral, the conversation around AI at times dominates the news cycle. We’re seeing models and services show up in a wide spectrum of fields, from healthcare to content creation. At the same time, the entire world is grappling with climate change, and regional conflicts seem to threaten another global war.

So, how far has this booming new industry really come, and what is its real value? There’s a lot going on with AI behind the scenes. It’s important for consumers to understand what they’re getting into, especially in chaotic times.

AI now

From top to bottom, AI’s issues reflect the patterns of inequality and environmental harm we have come to expect from the tech sector. In its production, AI technology relies on minerals that are often mined destructively, in ways that drive some of the world’s worst human rights violations. In its use, AI consumes enormous amounts of electricity, largely generated from fossil fuels, and fresh water for cooling. As components are thrown away, they add to the global waste crisis and release dangerous substances such as mercury and lead.

What’s more, we have seen warnings about the limitations of much-lauded Large Language Models (LLMs) come to fruition. Google in particular was seen as a strong competitor in AI development until the limits of its LLMs and Gemini’s mistakes started reaching the media. More and more, we’re seeing that LLMs have particular and limited uses, and those limits don’t necessarily fit the goals and dreams of engineers, tech hobbyists, and their investors.

LLM missteps

It’s no secret that Google’s AI summary generator has been plagued by issues with incorrect or misleading information. For a moment, the giant search engine’s summaries popped up with the majority of searches – much to the surprise and disappointment of many users.

More recently, the summaries appear in only about 15% of searches, compared to 84% when Google first tested the feature.

After the mistakes went viral, Google stepped forward to admit to the problems (while addressing hoaxes among the reported mistakes). What else could they do? Screenshots made the rounds of social media and news media; it wasn’t a secret they could hide from.

Some, however, have pointed out subtler issues with Google’s overviews: information that is slightly incorrect or misleading, but not obvious enough to be easy to pick out. By itself, one slightly wrong summary of the President’s religious views or of how to make pizza isn’t much. But multiply that one mistake across Google’s millions of users. The effects might be subtle at first, but at that scale the misinformation could lead to unforeseeable consequences.

Potential medical applications of the LLM

LLMs can apparently pass major tests in the medical field, but integrating them into an actual clinical workflow poses more challenges than promises. Two studies found that LLMs developed for diagnostic purposes performed worse, or significantly worse, than physicians. A third suggested that the hype around ChatGPT and AlphaFold has created critical gaps in evaluative criteria, obscuring these tools’ real potential and limitations.

However, another study found that the LLM shows promise when it comes to matching potential patients to clinical trials. The task is to identify patients whose medical information aligns with a trial’s constraints and reach. When it comes down to it, matching patients to trials is detailed pattern matching: potentially mind-numbing work for a human that could be made more accurate by sophisticated pattern recognition software.
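To see why trial matching lends itself to this kind of software, here is a minimal, purely illustrative sketch in Python. The fields, thresholds, and patient records below are hypothetical and not drawn from the studies mentioned above; a real system, LLM-based or not, would first have to extract this structured information from free-text clinical notes.

```python
# Minimal, purely illustrative sketch of patient-to-trial matching as pattern
# matching. The fields and thresholds are hypothetical, not drawn from any study.

patients = [
    {"id": "P1", "age": 67, "diagnosis": "type 2 diabetes", "egfr": 55},
    {"id": "P2", "age": 58, "diagnosis": "type 2 diabetes", "egfr": 95},
]

trial_criteria = {
    "min_age": 50,
    "max_age": 80,
    "diagnosis": "type 2 diabetes",
    "min_egfr": 60,  # hypothetical kidney-function cutoff
}

def is_eligible(patient, criteria):
    """Return True only if the patient satisfies every inclusion criterion."""
    return (
        criteria["min_age"] <= patient["age"] <= criteria["max_age"]
        and patient["diagnosis"] == criteria["diagnosis"]
        and patient["egfr"] >= criteria["min_egfr"]
    )

eligible = [p["id"] for p in patients if is_eligible(p, trial_criteria)]
print(eligible)  # ['P2']: P1 is excluded by the eGFR cutoff
```

In practice the hard part sits upstream of this loop, in pulling those structured fields out of messy clinical notes, which is plausibly where an LLM’s pattern recognition earns its keep.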

These issues pale in comparison, however, to Gemini’s problems with race. Google’s software came under fire not too long ago for its missteps on that front: it regularly put out all-white results for innocuous prompts such as “surgeons in a serious discussion,” among other mistakes. The problem runs deeper than any one product, since it’s difficult to find unbiased data to work with in the first place.

When an algorithm is unable to understand the information it’s processing and reproducing, it’s truly no wonder that it accidentally ends up reinforcing typical political and cultural fault lines such as race.

The plateau of the LLM

Of course, AI development issues are not exclusive to Google. Large Language Models in general have come up against severe limitations. They are essentially statistical models built from enormous collections of text, with their output steered by “system prompts”. Part of the issue with Gemini seems to have been that the system would change its prompts based on user interactions. At first blush, it might seem like this could develop into a true cognitive process that functions much like a human mind.
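For readers unfamiliar with the term, a “system prompt” is just a standing instruction sent alongside every user message. Here is a minimal sketch using the OpenAI Python client as an example interface; the model name and prompt text are placeholders, not anything Google ships for Gemini.

```python
# Minimal sketch of what a "system prompt" is: standing instructions sent
# alongside every user message. Uses the OpenAI Python client as an example
# interface; the model name and prompt text are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system prompt: instructions the model follows for every reply.
        {"role": "system", "content": "Answer briefly and flag anything you are unsure about."},
        # The user's actual question.
        {"role": "user", "content": "Summarize why glaciers matter for fresh water."},
    ],
)
print(response.choices[0].message.content)
```

Change the system message and the output changes with it; if the system prompt itself shifts with user interactions, as seems to have happened with Gemini, the model’s behavior shifts unpredictably too.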

But this still isn’t true thinking. LLMs in general aren’t much more than pattern recognition and repetition (in Gemini’s case, with rules that keep changing). Solving the problem of how to prevent an LLM from regurgitating nonsense is no straightforward or easy task for anyone. Indeed, a study by Apple engineers reported by Ars Technica suggests that pattern recognition is the limit of such models. The results lend credence to the view that LLMs won’t be able to replicate true reasoning anytime soon if they’re merely repeating the reasoning they’ve seen before.
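The method behind that kind of finding is straightforward to sketch: take a word problem a model answers correctly, vary surface details that don’t change the answer (names, quantities, an irrelevant clause), and see whether accuracy holds. Below is a small, purely illustrative Python sketch of generating such variants; the template and values are made up, and each variant would be sent to whichever model you’re evaluating.

```python
# Illustrative sketch of the perturbation idea: generate surface-level variants
# of one word problem and check whether a model's accuracy holds across them.
# The template, names, and numbers are invented for illustration.
import itertools

TEMPLATE = (
    "{name} picks {picked} apples and gives away {given}. {distractor} "
    "How many apples does {name} have left?"
)

names = ["Maya", "Tom"]
counts = [(12, 5), (31, 9)]
distractors = ["Later it starts to rain.", "Five of the apples are unusually small."]

variants = []
for name, (picked, given), distractor in itertools.product(names, counts, distractors):
    prompt = TEMPLATE.format(name=name, picked=picked, given=given, distractor=distractor)
    variants.append((prompt, picked - given))  # ground truth travels with the prompt

for prompt, answer in variants:
    print(f"expected {answer}: {prompt}")

# Send each variant to the model under test; a system that truly reasons should
# score the same on all of them, while sharp drops on renamed or distractor-laden
# versions point to pattern matching.
```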

Being honest about the limitations of the LLM from the beginning might have spared Google such a huge public mess. Some have credited OpenAI and DALL-E 2 for being upfront about potential bias issues rooted in the datasets themselves. Finding a dataset without bias is undoubtedly an uphill task (as is finding honest leaders in large corporations, it seems).

Perhaps the original hype merely overestimated what an LLM is capable of doing. While there’s plenty of use for advanced pattern recognition, it seems unlikely LLMs will be capable of more than that.

Of course, there’s more than one kind of AI in the works. To bridge the gap between where we are and where we want to be, many competing models are in development or on the market. AI’s issues, unfortunately, extend far beyond the limits of its current technology.

Beyond the models

Researchers at The Princeton Review have identified several key problems in AI development:

  • Bias – the potential for AI applied to medical systems to reproduce or even widen existing disparities in health treatment and outcomes. We’ve seen this with Gemini’s race issues.
  • Privacy – AI uses enormous amounts of data, both public and private, creating real potential for misuse. I speak more to this below.
  • Accountability – AI systems tend to be informational “black boxes” that hide the secrets of their development. That lack of transparency undermines users’ ability to hold these systems accountable for missteps, and as yet there seem to be no clear solutions for balancing transparency against the protection of trade secrets.
  • Security – Transparency issues are compounded by AI’s use of vast amounts of personal and private data. As the algorithms become more complex, they also become more vulnerable to cyber attacks, which could expose the data of people who were never consulted about its use. This ties back into the issue of privacy.
  • Job displacement – The widespread integration of AI across industries could accelerate automation, pushing out existing workers and widening income disparity. We are already seeing AI pushing at the edges of rideshare and taxi services with companies like Cruise and Waymo. Add general transportation to that, and we’re looking at millions of people facing potential job displacement via transportation automation.

Job displacement

Particular jobs, namely bus, taxi, and truck driving, require no college degree while paying above minimum wage. That means they primarily attract older workers, typically with less formal education and more limited transferable skills, which puts these populations at increased risk of under-employment or unemployment in the wake of automation.

However, other job types, mostly tech-driven or data-related, are expected to grow as the new industry develops. And access to those tech-centered jobs will be difficult for drivers without higher education or tech-related skills.

Those living on transportation wages often support entire families within and outside the United States. Expecting them to transition smoothly, without adequate support, into new job sectors that usually require at least a bachelor’s degree seems dubious at best. While it’s certainly possible on an individual level, we’re talking about the displacement of a ubiquitous industry. Pushing automation too fast without supporting workers through the transition could cause an economic depression for the working poor in urban centers.

Creative work

On top of that, artists around the world have been raising the alarm since at least the fall of 2022 about the ethical abuses of developing AI technologies. Digital and visual artists in particular have been at the forefront of the discussion about the art that feeds these machines. These AI programs often do little more than copy their art outright, without attribution, acknowledgment, or compensation.

When it comes to the good AI could do for art, there is real potential to democratize creation. By flattening access to creation regardless of ability or resources, more people could express themselves more confidently. In practice, however, this often comes down to replacing paid artists with AI: the very same AI that may have been trained, without consent, on the work of an artist who might otherwise have been hired for the job. That’s a double loss.

Used instead of hiring designers, AI could still grow general artistic confidence, something often lost as we enter adulthood. But so far, we have only seen this happen at the expense of those making art for a living.

Some argue that computers can’t actually replace a human eye for aesthetics, and that artists therefore have no reason to worry. This position seems short-sighted given the already distressing levels of AI-based plagiarism that have proliferated with the use of image and video generators.

There must be a way forward that allows professionals to utilize AI that doesn’t ultimately steal ideas or work from actual human beings.

Privacy, security, and violence

Another issue that needs to be at the forefront of AI development is its use in conflict zones. In particular, multiple organizations have reported on the use of digital tools in Israel’s genocide in Gaza. The Israeli military is using AI in conjunction with surveillance technologies and other digital tools to choose targets in Gaza. Given that military’s reputation for shooting children, the issue could not be more urgent as we discuss current trends and the future of Artificial Intelligence.

Why AI is not ready for war

According to Al Jazeera, Israel’s untested AI targeting systems identified as many as 37,000 targets. The data feeding this decision-making process came from systematic surveillance of Palestinian residents, and inaccurate approximations from faulty data further exacerbate the human rights abuses of Israel’s military campaign.

Israel’s current use of AI relies on data compiled in violation of Palestinian rights to privacy as indicated by international human rights law. Such use ties into an entire system of human rights violations referred to internationally as apartheid.

The use of AI tools in military contexts relies on invasive surveillance and digital dehumanization, a trend we’ve seen since Obama-era drone strikes, which caused PTSD among their operators. When human beings are reduced to data points or specks on a screen, it’s easier to make the decision to harm them, up to and including lethal action.

The global climate

When considering the current and potential impact of AI, we also have to look at the environment. AI is seen by many as potentially streamlining efforts to restore ecosystems and track climate change. Some see this as empowering us with a greater ability to detect and find solutions to existing and future climate problems.

Right now, however, it seems we struggle even to gather enough data on AI’s own impact on the environment. The minerals required to build AI hardware, such as coltan, are typically mined destructively, and often in areas where conflict over those minerals drives war, displacement, and genocide. On the other end of the lifecycle, electronic waste from AI technology includes hazardous substances like mercury and lead.

AI also requires vast amounts of water to cool its electronic components. That is a huge issue when a quarter of living human beings, including millions of people in the United States, lack adequate access to clean water and sanitation. And that’s before we get to the power requirements, which are still met primarily with fossil fuels.

Moving forward, we need governments to regulate the use and impact of AI. That includes requiring companies to build more efficient AI that cuts the resources it consumes and wastes, whether by reducing energy consumption, recycling water and components, or switching to green energy sources.

It’s still anyone’s guess which model will dominate in the future. Just as important is that we keep evaluating the environmental and social impacts as we live through ongoing crises and continued tech development.

Personal reflection

What do I want? I want an AI for the people – AI that helps allocate energy, food, and water according to need, that helps companies predict consumption in a way that reduces production and after market waste, that provides critical backup to doctors and nurses, and greater tools for artists, and more. The potential of AI cannot be quantified; that’s exactly what makes it so alluring in the first place.

But that’s not really what we have, which makes me question our pursuit of it. Exactly as people questioned Elon Musk’s plans for reaching Mars or the US push to the moon: what is the use if it’s not for the people? If anything new we do reproduces familiar inequalities and contributes to problems we’ve been facing for decades just to line the pockets of a few already rich people, why are we doing it?

I think AI could be a great boon to the world and its people under tight regulatory conditions and strong ethical considerations. Thus far, there’s not a lot of agreement or certainty on just how AI regulation can or should proceed. Personally, I’ll be avoiding AI as much as I can until there is.

Check out:

AI Extensions: Convenience, Caution, and Consideration

The ‘New Normal’: Climate Change and Its Link to Canada’s Worst Wildfire Season
