
A Deeper Conversation on AI

“Generative AI” has been floating around everywhere. From social media to news outlets to everyday conversation, everyone is talking about this latest innovation. As a tech leader, I hear it more than most: the genius of it, and how to use it in the corporate world.

 

However, while we talk a lot about how ChatGPT and others of its ilk are revolutionary, able to generate content in the blink of an eye from the images or text they are fed, we rarely discuss the dangers of AI or build a true understanding of what a future with #AI might look like.

 

I already see ChatGPT’s issues in the very fabric it is made from: content, and particularly what it is consuming. Because #ChatGPT works by being fed data and content from existing artists and writers, some of those creatives object to their work being used in ways they never authorized. In fact, users of #GenAI programs may not even know that they could be infringing on the original creator’s work.

 

One suggestion for addressing this: invest in open-source LLMs, which offer a transparent view of the data your Gen AI tools are being "fed." That transparency may help curb privacy issues, such as reported cases of people’s private medical records being used to create Generative AI content.

 

Our piece last week introduced Gen AI and discussed the issue of Generative AI being fed content. However, what is worse than plagiarism is how these programs can be used to distort the bodies of celebrities, and even ordinary people, depicting them in situations they never consented to. I see this as part of a larger problem of GenAI tools falsifying information and, given how fast information moves on the Internet, sowing confusion and chaos. It has always been true that whatever you put on the Internet could be used in ways you did not intend, but Generative AI compounds this issue to the point that it muddies even what the original was.

 

Another issue that ends up being overlooked is that these programs carry very human biases. Machines are assumed to be completely logical, but if the creator has biases, then the machine does too. Too often, the people building these systems look and think alike, meaning they do not know the issues or viewpoints of other races and classes. This can lead to failures such as facial recognition systems struggling to detect darker skin tones.

 

The darker implication here is that when we exclude certain demographics from the rooms where these machines are built, we risk their erasure in the future. Addressing these issues means being open-minded while remaining skeptical, which is what will allow Gen AI to be used to its greatest potential. So instead of diving headfirst into Gen AI, hold a formal meeting (or twenty) to honestly discuss how the product will be used, what problems it could cause, and the skills and people needed to keep it working.

 

In our previous issue, we discussed the problems with Gen AI, from bias to its programming. Far too often, however, we cannot have these discussions properly because Generative AI, especially in the media, is treated simply as a shortcut. That framing misses what Gen AI truly is: a new form of media. With new media come possibilities and risks we could never have anticipated. As it reshapes the media landscape and our communication skills, we need to remember just how new Gen AI is, making sure that any problems that arise are addressed and any opportunities presented are seized.

 

One big problem with Gen AI is that it cannot understand morality. At best it can be programmed with rules of conduct. The difference between Generative AI’s rules of conduct and human morality is that Generative AI cannot make exceptions to the rules it has been programmed to obey. For example, if a person steals a loaf of bread because they are starving, a Generative AI program will hand down the same sentence it gives anyone who steals, whereas a human has the flexibility to understand the context and empathize with the situation. The issue is that Generative AI can predict patterns but cannot understand exceptions to those patterns, and human beings are nothing if not exceptions. This is not an imaginary scenario. A growing number of sentencing officials are using AI programs to predict whether someone will reoffend and to decide how long their sentence should be. Add in the bias problems, and trusting Generative AI with tasks involving morality and justice becomes dicey at best and unethical at worst.

 

AI is also very bad at judging what to filter out or what is appropriate to say. Take, for example, Tay, the chatbot Microsoft released on Twitter to interact with and learn from the users there. She started the day with friendly, cheerful messages about puppies and ended it tweeting racist statements. Of course, some AIs, like ChatGPT, do have filters. But ChatGPT runs the opposite risk: being so heavily censored that it produces bland, boring content.

 

ChatGPT’s biggest fault, though, is how easily it gives false information, or "hallucinates," as experts call it. And given the speed of the Internet, that false information can spread quickly. When this happens, who will be held responsible for AI going wrong? Right now that question is being settled in court, and the outcomes of those cases will matter for instances of wrongful or overly harsh sentencing, and even for administrative decisions made by AI. Deciding who is accountable is a vital part of making sure AI is not given the wrong kind of power.

 

The bedrock of all of these issues is that Generative AI has knowledge but not understanding. To put it more visually, imagine a girl named Mary (borrowed from philosopher Frank Jackson’s famous thought experiment) who has lived her whole life in a black-and-white room and has never experienced color. Mary has, however, studied color theory intently and knows everything about every color, red in particular: its properties and how it interacts with the world. One day, she sees an apple and sees the color red for the first time. Does seeing it change anything? The answer to that question is what we are now watching play out with Gen AI, which is given all the knowledge of how to create without ever experiencing creation itself.

 

As I have discussed these topics with other CIOs across the business and tech world, one question keeps coming up: even if Gen AI could solve all of the present issues, what about the workforce? While there will still be many jobs for humans, in taste-making, programming, and even content creation, it is true that some jobs will never come back. But there is no need to panic! As the technological and societal climate keeps changing, newer opportunities will emerge and replace many of the jobs now considered safe bets.

 

Another problem: with all this progress, the increasing automation of everything, what if the world becomes a lonelier place to live in? We saw this with social media, which promised faster and easier interconnection and instead helped produce the current loneliness epidemic. If everything, including text, is automated, will fights on social media even have real meaning, or will we simply be two robots arguing uselessly? Will social media change, or even exist at all? The idea of finality, of endings, of things being lost has been done away with since the dawn of the Internet. Death itself seems to be slowly losing its grip, with a person’s ideas and image living on even past their physical death. While it can be worrisome to think about how our image may be used when we can no longer speak for ourselves, it is also reassuring. You, as well as history as a whole, will always be remembered, easily accessible to a future generation.

 

The issues and problems of Generative AI existed long before Generative AI was even considered a possibility. To reverse progress now is a fool's errand; machines and humans are already permanently linked.

 

Consider how many times you have panicked thinking your phone was not on your person, only to be relieved that of course it was. To go without your phone is like going without your arm; it has become a new sense, like your eyes, ears, or nose, one you need in order to perceive and even interact with the world around you. That realization can be terrifying, recognizing that you have less individuality than you thought, but then who we are as humans has always been bound up with others. In the age of the Internet, everyone is already so connected that even the experiences and histories of individual human beings are blending together. What may be an even harder pill to swallow is that machines, too, will become one of the senses through which we experience the world.

 

However, before we consider AI the new savior of humanity, we must understand that it cannot fix the problems of humanity. These remain, unfortunately, very human problems that need human solutions. Take inequality, which Generative AI will only heighten if only the wealthy can make use of it. We cannot compute a utopian society; we must instead do the grueling work of building one ourselves.

 

Still, understanding the utopia we are striving for can better guide how we use Generative AI. One possible vision is a world where fewer people have to work. After all, not having to deal with bosses and having more time for family is what automation was built for. Building families could even increase the population in developed countries. Better yet, it would free up time for people to experience being human rather than serving as corporate machines. AI may not be the savior of humanity, but perhaps it could return us to our more human selves.

 

 


Appendix

LLM (Large Language Model)–Programs trained on vast amounts of text data to generate new text content
