OpenAI’s commercial API release raises serious questions about AI misuse


The organization previously aired reservations related to potential misuse of the text-generation model. Now, the company is taking the AI commercial.

Image: iStock/MaksimTkachenko

Originally, the artificial intelligence (AI) research and development organization OpenAI was founded as a nonprofit with the ambitious mission of ensuring artificial general intelligence would benefit all of humanity. Since then, much has happened, starting with co-founder Elon Musk leaving OpenAI’s board in 2018.

In July of 2019, the narrative changed yet again after the company received a $1 billion investment from Microsoft. The company exists today as a “capped-profit” organization. Most notably during this time, the company developed a text-generating language system it chose not to release “due to our concerns about malicious applications of the technology,” then subsequently released that very system.

Now, OpenAI is taking it a step further. Earlier this month, the company announced that it was releasing its latest text-generation system as a commercial product.

Potential malicious applications

At first, there were concerns that OpenAI’s text-generating system was simply too good at churning out convincing text. In the wrong hands, these capabilities could enable a wide range of nefarious activities, including automating the production of misinformation campaigns and bogus news reports peppered with misleading information. So why did OpenAI choose to release the technology?

As part of the announcement, OpenAI created a FAQ page to help explain its rationale and mitigate concerns. One of the key arguments for the commercial release centers on the company’s charter and, well, capitalism.

“Ultimately, what we care about most is ensuring artificial general intelligence benefits everyone. We see developing commercial products as one of the ways to make sure we have enough funding to succeed,” OpenAI explained in a recent blog post about the release.

Aside from the monetary component, the company hopes that, in the long term, the API will help smaller organizations access advanced AI systems. For now, OpenAI has launched the text-generation product via an API available only in a private beta.

“The API model allows us to more easily respond to misuse of the technology. Since it is hard to predict the downstream use cases of our models, it feels inherently safer to release them via an API and broaden access over time, rather than release an open source model where access cannot be adjusted if it turns out to have harmful applications,” according to the OpenAI release.
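
For developers admitted to the beta, the product works like any hosted API: send a prompt over the network and get a completion back, with OpenAI able to revoke a key if it observes misuse. Below is a minimal sketch of that interaction, assuming OpenAI’s Python client library and a beta API key; the engine name and sampling parameters shown are illustrative, not documented defaults.

```python
# Minimal sketch of calling a hosted text-completion API.
# Assumes the `openai` Python client and a private-beta API key;
# the engine name and parameters are illustrative, not official defaults.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # key issued to vetted beta users

response = openai.Completion.create(
    engine="davinci",        # illustrative engine choice
    prompt="The commercial release of large language models",
    max_tokens=64,           # cap the length of the generated continuation
    temperature=0.7,         # higher values produce more varied text
)

print(response.choices[0].text)
```

Because every request flows through OpenAI’s servers, the company can monitor usage and cut off a key at any time, which is precisely the access control an open-source release cannot offer.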

SEE: Building the bionic brain (free PDF) (TechRepublic)

Responsible innovation

The company has said point-blank that it will terminate access to the API for any observed use-cases with the potential to “cause physical or mental harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam.”

Users with access to the private beta have been vetted individually to minimize risks of misuse. At any rate, this brings up ethical questions surrounding responsibility and innovation.

“I’m not of the type who believes that technology is just a useful tool, it can go either way. That’s a naïve way of thinking about technology,” said Hamid R. Ekbia, professor of informatics and cognitive science at the Luddy School of Informatics, Computing, and Engineering.

In recent weeks, IBM, Microsoft, and Amazon have all pulled their facial recognition systems from the market or banned law enforcement agencies from using them, as these technologies have major flaws related to false positive identifications, particularly among minority populations.

At the bleeding edge of AI innovation, we’ve seen companies prematurely roll out the latest developments with appalling results. Needless to say, once a new technology is introduced into the real world it has the potential to function in unforeseen ways, bringing its biases from the research laboratory into reality. 

When it comes to releasing a potentially detrimental AI system, what perpetually pushes the envelope of development?

“In general, my thinking about AI systems, such as this, is that they are focused too much on what machines are capable of doing as opposed to what we need as societies and that’s what drives a lot of these innovations,” Ekbia said.

The cost of innovation

In recent years, AI agents have been tapped to create original content ranging from sports reports to financial briefings. There are numerous industries that would certainly be interested in OpenAI’s advanced text-generating system.

The website Talk to Transformer lets you try OpenAI’s GPT-2 text-generation model. When fed a few words, the application continues the user’s prompt, spewing out line after line of text. The potential and commercial value are easily recognizable, especially with further refinement in the years ahead.
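
Talk to Transformer is a third-party demo, but the underlying GPT-2 model is publicly available, so readers can reproduce the same effect locally. Here is a minimal sketch using the Hugging Face transformers library; this is an assumption chosen for illustration, not the code behind Talk to Transformer itself.

```python
# Minimal sketch: continue a prompt with the publicly released GPT-2 model.
# Assumes the Hugging Face `transformers` package is installed
# (pip install transformers); shown for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "When fed a few words, the model",
    max_length=60,           # total token budget, prompt included
    num_return_sequences=1,  # generate a single continuation
)

print(result[0]["generated_text"])
```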

Nonetheless, the idea of mass production and the commercialization of common language as a commodity raises more complex, less quantifiable questions.

“If we push for this standardization of language by computers, so the computers can understand things, then we are going to eliminate, we are going to lose all of that richness, all of that wealth, that diversity, heteroglossia of language and that is my biggest fear. This is not romanticizing about the past. This is not nostalgia. This is just about the nuances of human life,” Ekbia said.

If machines are calling the shots on language production, will we be more likely to pose questions to our digital assistants in the language of our digital assistants? Will we become more likely to text and speak to one another in the language of machines? 

“If they push for the standardization of language so that computers can follow us and interact with us like Alexa and this and that, then we are going to lose that and that is not just the losing of language, that is losing many dimensions of human life. We don’t want to be robots I guess,” Ekbia said.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

An “existential threat”

For now, OpenAI’s mysterious wordsmith remains locked away in a digital cage, and tickets to see the big show have been reserved for a select few. However, it wasn’t long ago that Musk called AI the biggest existential threat facing humanity. Ekbia sees the existential threat differently from the dystopian, apocalyptic scenario routinely presented.

“It can be a threat to humanity only if we play along, not a threat in the existential sense of the term, not in the sense that we are going to be slaves to machines, but in the sense that we are going to change our behavior, our ways of life,” Ekbia said.
