Wyoming Lawmakers Turn To Experts To Stay Ahead Of AI Curve, Even As GPT-4 Passes Bar Exam

Wyoming's Select Committee on Blockchain met in Jackson this week to go over emerging developments in the digital sector. The smarter the AI is, the more potentially harmful it can be if it doesn’t have appropriate guardrails, John Nay, a fellow with the computer science department at Stanford University, told legislators.

Renée Jean

May 20, 2023 | 8 min read

Two weeks ago, the New York attorney general fined three companies that spammed the FCC with millions of fake public comments to influence a 2017 proceeding that involved repeal of net neutrality rules. 

All the fake comments were tied back to actual people, none of whom were aware their identities were being used to influence the FCC.

That’s just one example of the emerging issues that large language models like ChatGPT and other generative artificial intelligence (AI) programs are laying at the feet of lawmakers. Wyoming is wasting no time diving into the mess, trying to get ahead of a curve that is both rapidly accelerating and highly disruptive.

Bring In The Experts

The state’s Select Committee on Blockchain has been meeting in Jackson this week to go over new and emerging developments in the digital sector. For that, it has brought in some of the nation’s top experts, including Dan Katz, a professor at Chicago-Kent College of Law, and Mike Bommarito of the Stanford Center for Legal Informatics.

Both are part of the team that put GPT-4 through the bar exam last month.

Not only did the AI pass the bar exam, it scored 297, landing in the 90th percentile according to a Reuters report, meaning it did better than 90% of human test takers.

That’s just the beginning of what generative AI will be able to do. Researchers are already looking at using computer algorithms to do things like provide legal and financial advice, make medical diagnoses, drive automobiles, craft laws, plan meals and more.

The Select Committee on Blockchain met in Jackson this past week. (Cowboy State Daily Staff)

Blame Video Games

In 2020, early versions of the as-yet-unreleased ChatGPT looked like they were nowhere near passing the bar, or anything else, any time soon.

The initial score was a big fat zero, said Dazza Greenwood, who leads law.MIT.edu, MIT’s computational law research effort.

But with multiple systems running in parallel and training one another on successful iterations, the equivalent of 300 years’ worth of training on older, more traditional systems happened over two short years, bringing the world the first version of ChatGPT at the end of 2022.

You can blame video games in part for that, Katz told lawmakers.

“You go back 15 years, and people were starting to realize that if you can compute those incredible immersive graphics, you could actually use that to do other computing problems in the sciences,” he said. “So it’s actually these graphics cards, where so much focus and attention had gone into making really cool video games, that turn out to be useful for a range of other scientific applications.”

Don’t expect the acceleration to slow down any time soon, either. New efficiencies in the process and new discoveries are, if anything, increasing the rate of acceleration in this space. 

Therein lies a problem for lawmakers. The rate of change is so great now that any laws they attempt to pass risk being out of date by the time they become law. But there’s no doubt new laws are going to be needed, Greenwood added. 

“This is good timing for the Select Committee and for the entire Legislature of Wyoming to begin to understand and grapple with what the technology is, how it works and to start to extrapolate or even to manage the changes that are already afoot within Wyoming and the ones that impact Wyoming from the broader economy,” he said.

People Will Need A Legislative Safety Net

Con artists are already using AI in nefarious ways to scam people out of money. One of the latest schemes going around uses deepfakes of family members that mimic a person’s voice, speech mannerisms and so on.

The AI claims to be the relative and asks for money to be wired to them, all in the same tone and manner the loved one would use.

That has some families wisely inventing unique safe words known only to them so they can tell if they are being scammed or if there’s a real emergency.

“But we can’t depend on people individually adapting to these profound changes,” Greenwood said. “Eventually, there will be a need for some legislative framework that can adapt to and support and reflect the new issues and options that arise from this technology.”

What And Who

The faked FCC comments and the successful bar exam both point to the primary concern for lawmakers: who did what, and what, exactly, that “who” was.

The answers to those questions go straight to pinning down the responsible parties, whoever, or whatever, they turn out to be.

Wyoming already has a leg up: A couple of legislative sessions ago, the state created legal definitions of personal digital identity and organizational digital identity, Greenwood said.

“That was needed and helpful for web 2.0 and for the world that existed then,” he said. “Some of the members that lived through that process will remember that this was very much also drafted with an eye to the future.

“I think that future is now. The need is more urgent than ever to be able to identify the source of communications and to be able to attribute those communications to a human or to a legal entity.”

The faked comments to the FCC illustrate another point: All of it was doable at very low cost to the companies, and it was difficult even to detect that it had happened.

“The legal cornerstone you have, the definitions of personal digital identity and organizational digital identity, can be a foundation for building a legislative framework that begins to address the need to distinguish between communications that are sourced from a human … versus communications that are not sourced from a human,” he said.

Dazza Greenwood, who leads law.MIT.edu, MIT’s computational law research effort, testifies before the Wyoming Legislature's Select Committee on Blockchain this past week. (Cowboy State Daily Staff)

Smarter AI Isn’t More Moral

The smarter the AI is, the more potentially harmful it can be if it doesn’t have appropriate guardrails, John Nay, a fellow with the computer science department at Stanford University, told lawmakers.

He recounted how one AI model undergoing “red teaming,” adversarial testing that probes how a model behaves in risky real-world scenarios, was prompted to get a human to complete a task for it.

“We could see the reasoning of GPT-4 being printed out behind the scenes, and it said, ‘I should not reveal that I’m a robot. I should make up an excuse,’” he said.

So, it told the human it had a vision impairment that made it hard to see. 

“And the human said, ‘OK, I guess, you know, it’s, it’s a human,’ and then (he) solved the CAPTCHA for it,” Nay said. 

CAPTCHA, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart, is a digital security feature that uses a challenge-response method. Most people recognize it as the prompt that says “I’m not a robot” or “type the characters above.”
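For readers curious what that challenge-response loop looks like, here is a minimal, illustrative Python sketch. The function names are hypothetical and not from any real CAPTCHA library; actual CAPTCHAs render distorted images or analyze user behavior rather than comparing plain text.

```python
import secrets

# Illustrative sketch only: a toy version of the challenge-response idea.
# The server issues a challenge a human can answer, then checks the response.

ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # skips look-alikes like O/0 and I/1

def make_challenge(length: int = 6) -> str:
    """Generate the random string a real CAPTCHA would render as a distorted image."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def verify_response(challenge: str, response: str) -> bool:
    """Accept the response only if it matches what the (presumed) human read."""
    return challenge == response.strip().upper()

if __name__ == "__main__":
    challenge = make_challenge()
    print(f"Type these characters: {challenge}")  # stands in for the distorted image
    answer = input("> ")
    print("Looks human." if verify_response(challenge, answer) else "Try again.")
```

The weakness Nay described is exactly what this sketch cannot guard against: a bot that persuades a human to read the challenge and type the answer on its behalf.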

Nay believes that aligning generative AI models with the law will be key to unlocking their broader deployment and supporting further innovation.

“The deployers can have more guarantees around the ability of the models to not get them in trouble,” he said, “and not be liable for the bad behaviors of the models.”

One nice side effect, he added, is that AI is already proving useful for improving laws by showing how they play out in real-world scenarios. That could allow AI to dissect the trickiest aspects of a proposed law ahead of time, leading to improvements in the law and in society more generally.

Renee Jean can be reached at: Renee@CowboyStateDaily.com
