Weekly Report: April 2, 2023

Observations

Initial Thoughts on AI

I have been mulling over writing a lengthy post about AI, but there is so much happening, so quickly, that I'm still processing it all. So instead, here is an assortment of initial thoughts. I'm sure there will be more in the future.

The New New Thing. In my lifetime, foundational developments in information technology have happened every decade or so: the personal computer, the internet, and the rise of mobile. All of these developments changed the world. (You’ll notice that absent from my list is crypto and the metaverse.) Generative AI now joins that list and I think not only is it as significant as the others, it’s on track to be in a league of its own.

Speed. The pace at which the technology is developing, and at which practical applications and an entire industry are springing up to take advantage of it, is crazy fast.

  • The most prominent player in the generative AI space is OpenAI. GPT-3 was originally released in mid-2020, but it wasn't until OpenAI released ChatGPT, a chatbot based on the GPT-3.5 large language model, that the technology sprang into the public consciousness. ChatGPT signed up a mind-blowing 1 million users within 5 days of its November 2022 launch, and an estimated 100 million within two months. OpenAI released GPT-4 last month and upgraded ChatGPT to use it (available via a paid ChatGPT Plus subscription). GPT-4 is significantly better than GPT-3.5 and can handle both text and image inputs. GPT-5 is rumored to be released by the end of the year.
  • The GPT models are static, relying on the information they were trained on at the time; as such, they are more or less frozen in time. ChatGPT plugins allow ChatGPT to access external information sources and perform actions against external systems, which multiplies its power. The fascinating thing is that you can use ChatGPT to help integrate itself with other systems. (A rough sketch of the tool-use pattern behind plugins follows this list.)
  • There are tons of companies, products, and use cases that have been created just this year that leverage GPT as well as other AI models that generate images, audio, video, and code.
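
To make the plugin idea concrete, here is a minimal sketch in Python of the tool-use loop that plugins boil down to. This is not OpenAI's actual plugin protocol (which is based on an OpenAPI spec and a manifest file); it's a simplified illustration using the early-2023 openai library's ChatCompletion API, and the get_spot_price tool, the TOOL reply convention, and the API key are my own hypothetical stand-ins.

```python
# Sketch of the tool-use loop behind ChatGPT plugins: tell the model what
# external tools exist, let it ask for one, execute the call ourselves,
# and feed the result back so it can answer with live data.
# The TOOL convention and get_spot_price tool are hypothetical stand-ins.
import json
import requests
import openai  # pip install openai (the 0.27-era ChatCompletion API)

openai.api_key = "YOUR_API_KEY"  # placeholder

def get_spot_price(pair: str) -> str:
    """Fetch a live crypto spot price from Coinbase's public API."""
    resp = requests.get(f"https://api.coinbase.com/v2/prices/{pair}/spot", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["amount"]

SYSTEM_PROMPT = (
    "If you need a live crypto price, reply with exactly: "
    'TOOL {"name": "get_spot_price", "pair": "BTC-USD"} '
    "Otherwise, answer normally."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What is bitcoin trading at right now?"},
]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
text = reply["choices"][0]["message"]["content"]

if text.startswith("TOOL"):
    call = json.loads(text[len("TOOL"):])
    result = get_spot_price(call["pair"])
    # Hand the tool's output back so the model can compose a final answer.
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": f"Tool result: {result}"})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    text = reply["choices"][0]["message"]["content"]

print(text)
```

The real plugin mechanism is richer than this, but the core loop is the same: describe the tools, let the model decide when to call one, execute the call, and hand the result back.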

How It Works. Read Stephen Wolfram's article for a detailed description of the concepts behind text-generative AI. Even if you don't understand all of it (I certainly didn't), or have to give up halfway through, you'll still come away with a good conceptual idea of how it works.

Use Cases. There are far too many use cases and examples of amazing things generative AI can do to list. Here is a smattering of interesting ones I have been thinking about lately:

  • Mimic someone’s voice just by providing an AI model with 30-60 seconds of audio. The output comes complete with accents, foreign languages, and vocal tics.
  • Inventing an idiom.
  • Of all the major voice assistants (Google, Alexa, Siri), Google's is the most capable. But it's no ChatGPT. Once you plug generative AI into them, you're going to see supercharged voice assistants capable of conducting conversations. Not only will the quality of their answers and the range of actions they can perform improve, but AI will improve the quality of voice recognition too. And if you want your voice assistant to sound like anyone on the planet, it'll probably be possible to do that one day (see "mimic someone's voice" above).
  • Within literally 10-15 minutes, I was able to create a working script for monitoring the prices of securities (both by scraping stock prices from a webpage and pulling crypto prices from the Coinbase API), recording prices to a database, and emailing price alerts, just by asking ChatGPT about 5 questions (some debugging was still required). A sketch of what such a script might look like appears after this list.
  • In 2017, I spent a week writing code that helped me arbitrage BTC and ETH between a half dozen exchanges. I estimate I could have completed that in maybe a third to half of that time if I had help from AI.
  • Technical analysis and algorithmic trading.
  • Legal document analysis and review.
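
For flavor, here is a minimal sketch of the kind of price-monitoring script described above (the stock-scraping half is omitted). It uses Coinbase's public spot-price endpoint; the SMTP server, email addresses, and alert threshold are hypothetical placeholders.

```python
# Minimal sketch of a price monitor: fetch a crypto price from the
# Coinbase public API, record it to a SQLite database, and email an
# alert if it crosses a threshold. SMTP details and the threshold are
# hypothetical placeholders; the webpage-scraping half is omitted.
import sqlite3
import smtplib
import requests
from datetime import datetime, timezone
from email.message import EmailMessage

ALERT_THRESHOLD = 30_000.0  # hypothetical: alert when BTC-USD exceeds this

def fetch_btc_price() -> float:
    resp = requests.get("https://api.coinbase.com/v2/prices/BTC-USD/spot", timeout=10)
    resp.raise_for_status()
    return float(resp.json()["data"]["amount"])

def record_price(db: sqlite3.Connection, price: float) -> None:
    db.execute("CREATE TABLE IF NOT EXISTS prices (ts TEXT, symbol TEXT, price REAL)")
    db.execute(
        "INSERT INTO prices VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), "BTC-USD", price),
    )
    db.commit()

def send_alert(price: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"BTC-USD alert: {price:,.2f}"
    msg["From"] = "alerts@example.com"  # placeholder
    msg["To"] = "me@example.com"        # placeholder
    msg.set_content(f"BTC-USD crossed {ALERT_THRESHOLD:,.2f}: now {price:,.2f}")
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder server
        smtp.send_message(msg)

if __name__ == "__main__":
    price = fetch_btc_price()
    with sqlite3.connect("prices.db") as db:
        record_price(db, price)
    if price > ALERT_THRESHOLD:
        send_alert(price)
```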

Existential Risk. AI has issues. A ton of issues. The splashiest one is existential risk. The basic idea is that AI will at some point be able to develop itself, leading to a runaway super-intelligence that outstrips our ability to control it (a technological singularity). The human race becomes a casualty, whether through some form of malevolence or just as collateral damage from "optimization". It doesn't necessarily require an AI that is "self-aware"; it may just be the rampant execution of internal logic that results in major unintended consequences.

  • An open letter signed by Elon Musk, Steve Wozniak, and a host of luminaries in the AI world calls for a six month pause in the training of AI systems more powerful than GPT-4. They write:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

  • Eliezer Yudkowsky, arguing in TIME that the open letter doesn't go far enough, writes:

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in "maybe possibly some remote chance," but as in "that is the obvious thing that would happen." It's not that you can't, in principle, survive creating something much smarter than you; it's that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

  • Of course, there are those who flatly disagree, saying that AI is nowhere near the stage at which we should worry (e.g. we're still far from building an artificial general intelligence), that the worry-warts have ulterior motives (they want the additional time to catch up to OpenAI), or that in practice there's no way to halt AI development, so why bother?
  • The latter school of thought is emblematic of today's world, in which lack of expertise in a field isn't a barrier to expressing a strong opinion. Strong opinions are provocative, and provocation is a means of getting attention. In reality, we just don't know what's going to happen with any certainty. Even the creators of generative AI models can't predict what their own creations will do as they develop. But when the potential risks are astronomical, even a far-fetched and fanciful probability that they come to pass means those risks deserve thoughtful introspection, humility, and caution.
  • Given the potential for wide-ranging harm that generative AI has, I do think we need to get some tentative regulations and protocols in place, or at least start a robust discussion among all the major players. Had we known that social media would have the side effects it does today (misinformation, teen depression, etc.), perhaps some things might have turned out differently.
  • The idea of the Great Filter is one theory for why we haven't found intelligent alien life yet: some universal, insurmountable step prevents civilizations from expanding into the galaxy. If it exists, we cannot know whether we have already passed that step or whether it lies ahead of us. Nuclear war and climate change are the traditional candidates for a future Great Filter. Runaway AI is another that has been theoretical to date, but it feels more real today than ever.

More Issues.

  • Dystopia. Even if AI turns out not to be an existential threat, I don't think anyone disputes that it will eventually cause massive societal change. The short story Manna presents both dystopian and utopian views of AI. The dystopian threat doesn't look like Terminator robots or a Skynet-type overlord, but simply a bunch of machines optimizing situations without any regard for humanity. Consider the court system. Courts are the final avenue of appeal for disputes, including those that result from automated decision making (e.g. your insurance claim got denied). It doesn't take a big leap of imagination to cut out human judges and have computers render judgments in smaller forums, like small claims courts. After all, computers can synthesize the entire history of jurisprudence to figure out a ruling. And then, keep sliding down that slippery slope. (There's a reason the GDPR requires human intervention to be available where an individual disputes an automated decision based on their personal data.)
  • Bias & Misinformation. Generative AI has issues with bias — including political and racial — and being confidently incorrect. Therefore, there are real concerns about models having the ability (or some would say being intentionally imbued with the ability) to persuade and misinform, and the ability to help with misinformation campaigns. Additionally, Twitter believes that the challenge of detecting bots is much harder with AI in play, since sorting out who’s human and who’s not just got more difficult. In response, they are moving focusing on paid subscriptions as a way to help verify identity.
  • Criminal Enterprises. This one is going to be a really big problem. Think of all the scams that AI can supercharge, especially when combined with deepfakes.
  • Legal Issues. There’s a lot here.
    • The biggest one relates to copyright infringement. AI models are trained on a huge amount of data, and almost all of that data is owned by someone else. Is using it without permission infringement, or is there a fair use argument?
    • There are also data privacy concerns. Italian privacy regulators have banned ChatGPT due to concerns with GDPR compliance.
    • Ownership of materials generated by AI is also up in the air. As with a photo taken by a monkey, art generated by a computer is not eligible for copyright protection, according to the U.S. Copyright Office.
    • AI makes mistakes, so people may get into various sorts of trouble if they blindly rely on what it says. You can add all the disclaimers you want, but much of the time people won't be bothered to double-check what the computer tells them.
    • Information ingested by AI models also raises confidentiality issues for any sensitive information they take in. The information isn't stored as a discrete component of the model, so it's hard to extract.
  • The Chinese Government. Using AI models requires a great deal of computing power, so queries are currently sent off devices for processing in the cloud, but it's foreseeable that devices will eventually be able to do that processing locally. There's a hypothesis that Apple, with its impressive homegrown chips, is maneuvering into a position to do this (it's also in line with its emphasis on privacy). It's tougher to censor what happens on a device, so this represents a challenge for the Chinese Government's censorship efforts. I imagine it will lead to the technology, and any hardware that runs it, being heavily regulated or even banned. It is of course possible to build models that return answers that toe the party line (OpenAI has been accused of injecting liberal bias into its models, and it also censors results that are deemed harmful), but AI is tougher to directly control because it's a bit of a black box. Travel blogger Gary Leff writes: "Perhaps AI and Large Language Models represent the next best hope for an end to repression in China. If China wants to compete in this space, they'll have difficulty doing it behind the Great Firewall. Their tools won't be as strong as the ones from the West, with access to more knowledge to train on. Is there a stable equilibrium where their AIs can train on unrestricted content, but answers from the AI remain restricted? What about when everyone has AI chatbots on their phones, rather than central servers?" I'm unfortunately not optimistic about this: in earlier times people thought the internet couldn't be tamed, and the Chinese Government has done a pretty darned good job of it. (A sketch of early on-device inference follows this list.)
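
As a taste of what on-device inference already looks like, here is a minimal sketch using the llama-cpp-python bindings for llama.cpp, one of the early-2023 projects that runs LLaMA-family models locally on consumer CPUs. The model path is a placeholder; you have to supply your own quantized model weights.

```python
# Minimal sketch of local, on-device LLM inference via llama-cpp-python,
# the Python bindings for llama.cpp. No cloud round-trip is involved:
# the model runs entirely on the local machine.
# The model file path below is a hypothetical placeholder.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")  # placeholder path

output = llm(
    "Q: Why is on-device inference hard to censor? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents a follow-up question
)
print(output["choices"][0]["text"])
```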

Miscellaneous Stuff.

  • It has been said that the internet makes information globally available, so gaining an advantage is less about who has access to information and more about who can analyze it the fastest and execute on the actions needed to capitalize on it. Now, with AI helping to analyze information, and potentially being able to act on it, where does that leave humans? A lot of valuable content will now consist of real-time reporting and opinion pieces, since research and summarization are things AI is pretty efficient at.
  • I’ve also tried Google Bard and Bing Chat, but ChatGPT is the best (and it can write code).
  • ChatGPT uses a "prove you are human" Captcha, which I thought was ironic. Captcha's days are numbered.
  • When I joined a law firm after graduating from law school, a second year solicitor was assigned to show me the ropes (we both reported to the same partner and were on the team that did work for Microsoft). That person is now the Chief Responsible AI Officer for Microsoft. I haven’t spoken to her in the 15 years since I left the firm, but that job must really be something!

Further Observations

  • Trump got indicted on over 30 counts on Thursday. The counts are supposed to become public next week. Interesting times ahead.
  • The CFTC is going after Binance. Binance's response is a bit lame, and there's a lot of stuff in the CFTC complaint that really doesn't look good for Binance. One example: "Binance's senior management, including Zhao, knew the Binance VPN guide was used to teach U.S. customers to circumvent Binance's IP address-based compliance controls. In a March 2019 chat, Lim explained to his colleagues that 'CZ wants people to have a way to know how to vpn to use [a Binance functionality] … it's a biz decision.' … in a July 8, 2019, conversation regarding customers that ought to have been 'restricted' from accessing the Binance platform, Lim explained to a subordinate: 'they can use vpn but we are not supposed to tell them that … it cannot come from us … but we can always inform our friends/third parties to post (not under the umbrella of Binance) hahah.'"

Movies & TV

  • John Wick: Chapter 4
    You know what you’re getting, and there’s 2 hours and 49 minutes of it. And Keanu only speaks about 360 words in the whole movie. I was thoroughly entertained.

Charts, Images & Videos

[Chart not reproduced. Source: Republic Bank]
