
ChatGPT Watermarking: What’s Really Human?

Tom Taulli

“Is it live or is it Memorex?”

This was an effective tagline for commercials in the 1980s. Memorex sold audio cassettes, and the company claimed its recordings were hard to tell apart from the real thing.

Fast forward to today and a company like OpenAI could have a similar ad campaign. Its tagline could be: “Is this AI or human?”

This is the dilemma many people face with OpenAI’s hugely popular ChatGPT. The system’s output often reads as if a human wrote it.

The implications are far-reaching. Let’s face it, high school teachers’ lives just got tougher. Is that student essay the work of a clever AI?

Even more serious, systems like ChatGPT could be leveraged for nefarious purposes, such as generating phishing messages at scale or launching misinformation attacks against other nations.

As for OpenAI, it is looking at ways to deal with potential issues. After all, the company’s stated mission is to “ensure that artificial general intelligence benefits all of humanity.”

Then what to do about the human vs. AI problem? OpenAI is exploring various approaches, but watermarking looks like a priority. This is according to a recent lecture from computer scientist Scott Aaronson, who is serving as a guest researcher at OpenAI. His focus is AI safety.

The term “watermarking” is somewhat misleading, though. This is not about slapping a visible tag across the content. Instead, for OpenAI, it involves sophisticated cryptography.

Consider that GPT technology, a transformer neural network model, tokenizes words, punctuation marks, and even parts of words. When processing input, it creates a probability distribution over the next token. GPT then samples from that distribution, which introduces some randomness. This explains why the same prompt does not always produce an identical answer.
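To make the sampling step concrete, here is a minimal Python sketch, not OpenAI’s actual code; the toy vocabulary and scores are invented for illustration.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw model scores (logits).

    Softmax turns the scores into a probability distribution;
    sampling from it, rather than always taking the argmax,
    is the source of the randomness in the output.
    """
    rng = rng or np.random.default_rng()
    scaled = (logits - logits.max()) / temperature  # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Toy vocabulary and scores, invented for illustration.
vocab = ["cat", "dog", "bird", "fish"]
logits = np.array([2.0, 1.5, 0.5, 0.1])
print(vocab[sample_next_token(logits)])  # e.g. "cat" on one run, "dog" on another
```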

For the cryptographic approach, the model’s sampling is subtly biased using a function keyed with a private key, and that same key drives a detection function that spots the watermark. The generated text will carry a statistical pattern of word choices that a human reader should not be able to notice, but that anyone holding the key can verify.
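In his lecture, Aaronson sketched the core trick: for each candidate token, derive a pseudorandom number r from the recent context using a keyed function, then emit the token that maximizes r ** (1 / p), where p is the token’s probability. Here is a toy reconstruction of that idea, assuming an HMAC-based pseudorandom function; the key, context encoding, and scoring details are illustrative guesses, not OpenAI’s implementation.

```python
import hashlib
import hmac
import math

SECRET_KEY = b"demo-key"  # hypothetical; the real detection key stays private

def prf(key, context, token_id):
    """Keyed pseudorandom number in (0, 1) for a (context, token) pair."""
    msg = context.encode() + token_id.to_bytes(4, "big")
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return (int.from_bytes(digest[:8], "big") + 0.5) / 2**64

def watermarked_pick(probs, context, key=SECRET_KEY):
    """Choose the token maximizing r ** (1 / p).

    For independent uniform r values this selects token i with
    probability p_i, so the output distribution is unchanged,
    but the chosen token's r value is biased toward 1.
    """
    best, best_score = 0, -1.0
    for i, p in enumerate(probs):
        if p <= 0:
            continue
        score = prf(key, context, i) ** (1.0 / p)
        if score > best_score:
            best, best_score = i, score
    return best

def detection_score(token_ids, contexts, key=SECRET_KEY):
    """Average -log(1 - r) over a text.

    Unwatermarked text averages about 1.0; watermarked text scores
    noticeably higher because its r values cluster near 1.
    """
    rs = [prf(key, c, t) for c, t in zip(contexts, token_ids)]
    return sum(-math.log(1.0 - r) for r in rs) / len(rs)
```

Note that the detector needs only the key and the text itself, not the model, which is what makes a lightweight verification service plausible.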

It’s certainly an interesting solution. In fact, Aaronson has noted that a prototype of the watermarking scheme has worked quite well.

Yet there are some nagging issues. First of all, a smart data scientist can build an AI system that rewrites the content just enough to evade the watermark. This is no different from the constant arms race in cybersecurity. There will always be workarounds.

Next, the watermarking system will cover only OpenAI’s content; against text from other providers’ models it will be useless. Basically, the different providers will need to cooperate, which will be no easy feat. Or perhaps each will have its own watermark, in which case you would need to run a battery of tests against any given piece of content, as in the hypothetical sketch below.
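In that fragmented world, checking a document might look something like this purely hypothetical loop; none of these detector functions exist today.

```python
def classify(text, detectors, threshold=0.9):
    """Run text through each provider's (hypothetical) detector.

    `detectors` maps a provider name to a callable returning a
    score in [0, 1]; providers scoring above the threshold are
    flagged as the likely source of the text.
    """
    scores = {name: detect(text) for name, detect in detectors.items()}
    return [name for name, score in scores.items() if score >= threshold]

# Usage sketch, with imaginary detector callables:
# flagged = classify(essay, {"openai": openai_detect, "acme": acme_detect})
```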

There could be regulation. But governments tend to be quite slow in implementing requirements, and they will inevitably take different approaches. It seems like a good bet that China’s rules will be starkly different from the EU’s.

Then there is the elephant in the room: since OpenAI holds the detection key, we need to trust OpenAI, right? Granted, it seems to be a responsible organization. But even well-meaning ones have their biases and blind spots.

The bottom line is that there will be no panacea. With the disruptive and chaotic pace of innovation, there will be plenty of challenges. If there is a silver lining, it’s that OpenAI is at least looking for ways to minimize the problems.
