
10 Risks of Using Artificial Intelligence in Your Subscription Business, Part 1

AI has significant benefits, but there are inherent risks that subscription companies need to be aware of.

Artificial intelligence, or AI, is evolving rapidly, with new deals, partnerships, applications and headlines popping up daily. It is a powerful tool for increasing productivity, identifying fraud, enhancing discoverability, improving quality control, and managing energy use, among thousands of other applications. However, AI comes with notable risks that subscription companies need to be aware of when using it to operate and manage their businesses. In this three-part series, we will look at some of the risks of using AI, how to increase awareness of those risks, and how to mitigate them so that the incredible power of AI can be harnessed responsibly.

In the last year, there have been several high-profile copyright cases related to generative AI and how a work (e.g., artwork, books, etc.) was created. There are three primary concerns, according to AI Multiple:

  • Whether AI-generated works are eligible for copyright protection
  • Ownership of copyright
  • How copyrighted works are being used to train algorithms

Are AI-generated works eligible for copyright protection? The short answer – it depends. The US Copyright Office says its review and evaluation of copyright in machine learning dates back to 1965. According to the Office, copyrightability comes down to human authorship: whether a work was created entirely, or only in part, by a machine.

The US Copyright Office shared two cases its Review Board had evaluated last year involving AI-generated work. In one case, the Office’s Review Board refused to register a work because it lacked creative input or intervention from a human. In the other situation, a registration application was for a graphic novel with text by a human author and illustrated using AI-based Midjourney. The Office said the text and arrangement of text and images was copyrightable, but the AI-generated images themselves were not.

Earlier this month, in a high-profile case, the US Copyright Review Board reviewed Jason M. Allen’s second request for reconsideration of an award-winning two-dimensional piece of art created using Midjourney. Though Allen further manipulated the image, it wasn’t enough for the Review Board to change its ruling. The artwork cannot be copyrighted as submitted, meaning the piece has no copyright protection and others are free to use it.

Last year, Allen used generative AI to create the image at the center of the case, “Théâtre D’opéra Spatial,” a digital image that looks like an oil painting. The majority of the work was created using Midjourney, a generative AI tool that creates images based on text prompts. Allen won a Colorado State Fair contest for his work, and the controversy began. Allen used “at least 624” text prompts before Midjourney produced the image he envisioned, reports Colorado Public Radio.

“Art doesn’t create itself, and as much as you might want to will a paintbrush to create a painting for you, it’s not going to,” Allen said in an interview. “And right now, we’re just talking about it being [created by] a much more complex system, a much more complex tool, but it is multimodal by nature, which means it requires human interaction in order to function.”

Copyright © 2023 Authority Media Network, LLC. All rights reserved. Reproduction without permission is prohibited.

Who owns the copyright when text or images are generated by AI? Copyright law states that only humans can be granted copyrights; work created by artificial intelligence rather than a human cannot claim copyright protection. The distinction is whether there was human intervention in the creation of a work. A book, for example, may receive copyright protection if AI assisted in its creation but did not generate it entirely. A graphic novel and its accompanying story can be copyrighted if the author used AI to assist in developing the book; however, if the images were generated with artificial intelligence, the images themselves cannot be copyrighted.

Last week, authors John Grisham, Jodi Picoult, George R.R. Martin, David Baldacci and others filed suit against OpenAI for “systematic theft on a mass scale,” according to the Associated Press.

“It is imperative that we stop this theft in its tracks or we will destroy our incredible literary culture, which feeds many other creative industries in the U.S.,” said Authors Guild CEO Mary Rasenberger in a statement. “Great books are generally written by those who spend their careers and, indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.”

Why does this matter?

Copyright can impact a range of subscription companies. Journalistic reporting produced with generative AI goes against the editorial standards of many companies. The Associated Press, for example, issued its own guidelines on when generative AI can be used; the AP prohibits the use of generative AI for news stories or images. When journalists receive information and images from sources, they should confirm the data and sources and run reverse image searches to ensure they are not using an image created via generative AI. There are huge risks for subscription companies that rely on generative AI too heavily, fail to fact-check the generated material, or publish AI-generated work that violates copyright.

At the same time, the Associated Press has formed a partnership with OpenAI, the developer of ChatGPT.

“Generative AI is a fast-moving space with tremendous implications for the news industry. We are pleased that OpenAI recognizes that fact-based, nonpartisan news content is essential to this evolving technology, and that they respect the value of our intellectual property,” said Kristin Heitmann, AP senior vice president and chief revenue officer, in a July 13 news release. “AP firmly supports a framework that will ensure intellectual property is protected and content creators are fairly compensated for their work.” 

“OpenAI is committed to supporting the vital work of journalism, and we’re eager to learn from The Associated Press as they delve into how our AI models can have a positive impact on the news industry,” said Brad Lightcap, chief operating officer at OpenAI.


No firm regulatory environment

Though artificial intelligence has been around since the 1950s, there are no solid rules and regulations governing its use, though some regulatory bodies are trying to rein things in. For example, the US Copyright Office is asking for public input to determine whether regulation is needed. Considering the several high-profile class action suits against OpenAI and other AI companies for using copyrighted works in AI training, there does seem to be a need for oversight.

“As concerns and uncertainties mount, Congress and the Copyright Office have been contacted by many stakeholders with diverse views. The Office publicly announced a broad initiative earlier this year to explore these issues. This Notice is part of that initiative and builds on the Office’s research, expertise, and prior work, as well as information that stakeholders have provided to the Office,” the US Copyright Office says.

The White House is attempting to provide some structure for AI as well. At President Joe Biden’s direction, the federal government is working on a Blueprint for an AI Bill of Rights, which includes five principles to guide the design, use and deployment of automated systems and protect Americans as AI use grows.

“The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process,” said the Office of Science and Technology Policy of the blueprint.

Google has its own thoughts on responsible AI practices, which include recommendations like using a human-centered design approach, selecting multiple metrics to train and monitor AI models, examining raw data, understanding the limitations of a system, and continuing to monitor and update a system after deployment. Google also addresses fairness, inclusion, representative data sets, performance analysis, privacy and more.

On the other side of the Atlantic, the European Commission is a few steps ahead of the United States. AI in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. The Commission has been working on the legislation since early 2021.

“Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes,” the Commission said in a June blog post. “Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.”

One key area for the AI Act is to consider the different levels of risk: unacceptable risk, high risk, generative AI and limited risk. Though the AI Act is still being negotiated, the Commission hopes to finalize it by the end of 2023.



AI customer service can’t replace humans entirely

Using artificial intelligence as a customer service tool can help address a subscription company’s customer service needs, especially if staffing and resources are limited. For example, some companies use chatbots to simulate human conversations by answering commonly asked questions. In a similar fashion, integrated AI can be used to create content for a website’s help pages or FAQs based on questions about product features, pricing and different plan options. Generative AI for customer service can also facilitate orders, exchanges and returns, direct users to other resources for assistance, reduce costs and increase efficiency. Many subscription-based companies use such chatbots, including Amazon, Constant Contact, Lyft, Sephora, Fandango and Spotify.

According to Shopify, more than 40% of business-to-consumer businesses and nearly 60% of business-to-business companies use chatbots – software applications that use AI and natural language processing (NLP) – on their websites to help customers find the information they need. Chatbots are also used in mobile apps, on social media, and to provide 24/7 customer service.

“AI isn’t a replacement for human jobs – it’s a resource that requires humans to set up and prompt it to do specific tasks,” Shopify said.

Shopify hit the nail on the head. AI is a great tool to expand a subscription company’s ability to serve customers, but it cannot replace humans, emotional intelligence, or decision-making beyond specific parameters. For example, if a customer contacts a call center and gets put into a customer service queue, they may never get to speak to a real person. This can be very frustrating, creating a negative customer experience.

If the customer succeeds in getting through the queue, they can state their problem, and the AI will respond based on keywords, the customer’s history, and other information learned during training. What it can’t do is respond with empathy when a customer can’t make a payment due to a job loss or family emergency. AI also cannot handle complex situations that should be referred to a human.
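The keyword-matching-with-human-escalation pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation; the topics, canned replies, and escalation triggers are all invented for the example.

```python
# Sketch of a keyword-based support bot that escalates to a human agent
# when it detects a sensitive issue or cannot match the query.
# All intents, replies, and trigger words here are illustrative.

FAQ_RESPONSES = {
    "pricing": "Our plans start at $9.99/month. See the pricing page for details.",
    "cancel": "You can cancel anytime from Account > Subscription > Cancel.",
    "refund": "Refunds are processed within 5-7 business days.",
}

# Words that suggest an emotional or complex situation a bot should not handle.
ESCALATION_KEYWORDS = {"emergency", "complaint", "agent", "human"}

HANDOFF = "Connecting you with a support agent now."

def respond(message: str) -> str:
    text = message.lower()
    # Sensitive or complex issues go straight to a person.
    if any(keyword in text for keyword in ESCALATION_KEYWORDS):
        return HANDOFF
    # Otherwise try a simple keyword match against known FAQ topics.
    for topic, answer in FAQ_RESPONSES.items():
        if topic in text:
            return answer
    # No match: escalate rather than guess at an answer.
    return HANDOFF

print(respond("How do I cancel my plan?"))
print(respond("We had a family emergency and missed the payment"))
```

Note the design choice in the final fallback: when the bot has no confident match, it hands off to a human instead of improvising, which is exactly the safeguard the paragraph above argues for.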

One risk is the lack of human interaction and connection. If subscription companies can provide the option to speak to a real person in certain situations, customer service and experience are improved, helping with retention and potential subscriber referrals. If the customer has a bad experience (e.g., can’t speak with a human when needed), however, the opposite is true.

Another risk of using AI for customer service is providing inaccurate, biased or false information. You know the saying “garbage in, garbage out”? That’s particularly true of AI in customer service situations. AI can only produce information that it possesses and that it has been trained on, TechTarget explains. If the data is inaccurate, outdated or incomplete, flawed or false information will be conveyed to the customer. And don’t forget glitches and bugs. These can cause downtime and mistakes.

There are infinite applications for AI in the customer service realm, and AI chatbots can be trained to provide personalized service based on a customer’s history and preferences. But they aren’t foolproof. They must be tested and refined to provide the best customer service possible. They are a great tool when used properly.


Lack of human oversight

Along with the lack of human interaction and connection, limited human oversight is another shortfall of artificial intelligence. Oversight matters in customer service when complex, emergent or emotional issues arise, but it is critical in other applications at subscription companies as well. Gannett understands that all too well. The massive media company recently paused its experiment with AI service LedeAI, says CNN, after subpar AI-generated reporting embarrassed the Columbus Dispatch, a Gannett-owned newspaper.

The newspaper ran high school sports stories with remarkably similar wording that omitted key details a sports reporter would not miss. One article later shared on social media even included text prompts that were never filled in. Though the article on the Columbus Dispatch website has since been updated, the original version was captured by the Wayback Machine web archive. To Gannett’s credit, the byline reads LedeAI, not “Joe Sportswriter.”


“We are continually evaluating vendors as we refine processes to ensure all the news and information we provide meets the highest journalistic standards,” a Gannett spokesperson said.

In a September 15 article for the Harvard Business Review, Joe McKendrick and Andy Thurai assert that AI isn’t ready to make “unsupervised decisions.” While AI can often make sound decisions, it doesn’t have the capacity to consider ethical, moral and other human factors to guide decision making.

“The bottom line is, AI is based on algorithms that respond to models and data, and often misses the big picture and most times can’t analyze the decision with reasoning behind it. It isn’t ready to assume human qualities that emphasize empathy, ethics, and morality,” McKendrick and Thurai wrote.

An unhealthy dependence on AI

As our usage of AI has grown and made us more productive and efficient, we have become more reliant on artificial intelligence. Rather than reading and researching ourselves, we use AI text or audio prompts to create essays, cover letters, reports, emails, articles, white papers and other content. Some, like MEC workshop, assert that this reliance on AI is dumbing us down as we grow more dependent on technology to do our thinking for us.

MEC workshop is not waiting for the sky to fall or the world to be taken over by robots. They believe (or hope) it is more likely that we will learn to use artificial intelligence responsibly, ethically and transparently in a way that balances it with human intelligence. They encourage us to engage our brains, our staff to use AI “responsibly and rationally,” and take advantage of AI’s strengths without losing who we are.

“The increasing reliance on AI is a double-edged sword. While it can make our lives easier and more convenient, it also comes at a cost. However, by balancing using AI to our advantage and continuing to engage our brains in our tasks, we can help ensure that our relationship with technology remains positive and productive. Only by working together can we create a future in which humans and AI can coexist in harmony rather than one in which robots and machines dominate all aspects of society,” MEC workshop said.

Bernard Marr, a Forbes contributor, wrote an article about the 15 biggest risks of artificial intelligence. He agrees that balance must be found to minimize the risks that come with the use of AI.

“Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human intuition. Striking a balance between AI-assisted decision-making and human input is vital to preserving our cognitive abilities,” Marr wrote.


Next week, in part 2 of our AI series, we’ll discuss more risks of AI, including cybersecurity and data privacy concerns, job loss, bias, and misinformation and manipulation.

