NEW YORK, NY – In most people’s minds, generative artificial intelligence (AI) is associated strictly with the creative industries and with the models that output images based on textual inputs called prompts. Much of the hype has been driven by curious social media users, who post the results of their often outrageous queries in search of the next hit of dopamine.
For freelance professional illustrators like Deb JJ Lee, who spoke to Silicon Icarus about the trouble AI-generated art has caused in their career, the “non-existent problem” that these technologies are trying to solve has caused very real economic difficulties for artists, whose work is now seriously undervalued as a result.
Ironically, these same generative AI models have been trained on the work of the very artists whose livelihoods are coming increasingly under threat. Lee discovered a set of AI-generated images that had clearly been trained on their own work, motivating the Korean American artist to join a group of anti-AI art advocates seeking to curb the growing influence of generative AI tools in the creative industries.
A lawsuit filed in January against Stability AI – creator of the Stable Diffusion generative AI tool – by fellow artists claiming copyright infringement, followed by several other actions like Getty Images’ own lawsuit, is just the tip of the iceberg. But there are many other applications of generative AI that have nothing to do with computer graphics and are poised to become far more disruptive than the sea change we are already witnessing.
Hype Machines
Human “touchpoints” are coming under threat in the shipping industry as well, with the advent of generative AI models that predict demand and produce other forecasting data used to analyze the state of the market and the operations of the industry itself. According to a group of Morgan Stanley equity analysts, the freight business “is on the cusp of a generational shift driven by disruptive technologies,” with AI the most powerful of them all.
Included in the bankers’ list of supply-chain-disrupting tech are drones, blockchain and autonomous vehicles, the last of which is projected to slash cost-per-mile by a whopping 25% to 30% and tip the scales against human drivers within three years, according to lead analyst Ravi Shanker. Shanker has been spearheading Wall Street’s media blitz for several weeks, signaling the finance sector’s strong desire for rapid adoption of this particular AI use case.
Shipping giant Maersk is leading the way for the industry itself, using OpenAI’s vaunted ChatGPT to “auto-generate FAQs” on its website and issuing a generative AI policy to govern the technology’s application. Yet the technology is still not ready for prime time, as reflected by Maersk’s own “human in the loop” policy, which requires every ChatGPT response to be validated by an actual person before it is propagated.
Such a policy not only underscores the limits of generative AI, but also reveals a fundamental truth about the rollout of this technology and its implications for the future of work. Since becoming the “fastest-growing consumer application in history” only months after its release, ChatGPT has already caused serious trouble for at least one lawyer who bought the hype.
Case Study
Steven A. Schwartz, who has practiced law in New York for three decades, has become the poster boy for the perils of ChatGPT after infamously using OpenAI’s chatbot to generate a court filing on behalf of a client who was suing Avianca Airlines over an injury he suffered at JFK Airport.
Unaware that the generative AI service he used to prepare his 10-page brief could simply make up “bogus judicial decisions, with bogus quotes and bogus internal citations”, Schwartz submitted the paperwork to Judge Castel of the Southern District of New York in an effort to thwart the defense’s motion to dismiss.
Dragged by the judge and colleagues for his embarrassing mistake, the attorney has been hounded by the media for the last several weeks as his tech faux pas landed him in hot legal water more than a year after the original lawsuit was filed. Castel ordered Schwartz and his case partner, Peter LoDuca, to appear at a court hearing earlier this month to discuss sanctions against the humiliated lawyers.
On Thursday, Castel fined Schwartz and LoDuca $5,000 and left any disciplinary action to the discretion of the New York State Bar Association. In his ruling, Castel declared that the misuse of ChatGPT in the court system “promotes cynicism about the legal profession and the American judicial system,” adding that future litigants might use it to undermine the “authenticity” of claims.
A less vigilant judge might not even have noticed that the “Varghese” opinion cited in Schwartz’s filing was fake, and might have proceeded to rule on the Avianca case based on completely false case law. In fact, Castel himself may have been fooled had the chatbot’s output been a bit more sophisticated, since it was the “gibberish” of the phony case’s procedural history that tipped him off in the first place.
Such egregious mistakes pose serious enough challenges for the judicial system, but the consequences could be catastrophic when these technologies become part of global supply chains. Generative AI cheerleaders like Shanker and his friends on Wall Street won’t broach such risks in their marketing spiels as they froth at the mouth over a technology that can take their speculative assets to unprecedented heights.
Among the complex supply chain processes Morgan Stanley’s analysts hope generative AI will affect are “predicting when trucks need maintenance” and figuring out “optimal shipping routes.” Maersk wants to use the technology to look at its customers’ transactions and “figure out the root causes” of business losses; in other words, it wants to model client behavior to drive decisions in the boardroom.
Accidental or deliberate manipulation of the data generated by these systems carries massive risks with potentially fatal implications. Market-driven famines and shortages of basic necessities are only some of the problems that generative AI in our supply chains could produce. Maersk’s human-in-the-loop policy is capital’s answer to these hazards, representing the new model of employment in the post-industrial, cybernetic enclosure.
Game of Thrones
Sketch comedy writer Adam Conover, one of the leaders of the ongoing writers’ strike in Hollywood, explained it succinctly while railing against the “gigification” of labor during a recent appearance on The Majority Report. He described a scenario in which studio executives use a generative AI tool like ChatGPT to produce a script, then hire actual writers at a fraction of normal, living-wage rates to “babysit an algorithm” by making the tweaks and applying the real-life know-how required to turn it into a usable script.
Beyond the typical red herring of “job losses” that usually accompanies discussions of AI adoption, the truly nefarious implication was drawn out by Conover’s interlocutor, Sam Seder, who clarified the legal stakes of having the AI “write” the script: “The WGA [Writers Guild of America] will often be asked to adjudicate who actually wrote this, who gets the credit for the story, who gets the credit for the script?” Once a generative AI tool is deemed the author of a work, writers lose all rights to royalties and any other residuals that might accrue.
“They’re already skipping out on giving artists a living wage,” says Deb JJ Lee, who was approached by the multi-billion-dollar video game company Epic Games to buy out their copyright for a paltry $3,000, a sum that wouldn’t even cover the average rent of a studio apartment in New York City, where they live. After refusing to hand over the rights to their work for use in the popular game Fortnite, Lee was assailed on social media for not caving in to Epic Games’ lowball offer.
A company of that size should be willing to pay the industry standard of $15,000 for rights in perpetuity, according to Lee, who happened to be in a position to reject the project but knows that someone in a “more desperate situation” will very likely take it. Driving down that price is exactly what generative AI is meant to do. It is the actual “problem” companies are trying to solve, and have been since the days of Standard Oil.
Reducing the cost of labor is the fundamental reason AI was developed. As Maersk’s CTO acknowledges, AI has “existed for a very long time” and “over the years, it has progressed from being interesting research projects to more ‘real’ projects within companies.” These “real” projects all revolve around cutting labor costs, and thanks to advances in machine learning – which, incidentally, had its genesis in the oil prospecting industry – CEOs around the world are ready to erect a virtually insurmountable wall between management and labor.
Worse still, the machine is starting to take credit for the work itself, as is already happening across the creative industries, where companies have begun building “content supply chain” systems to manage generative AI in their production pipelines. We will become the algorithm’s babysitters that Conover foresees, as the machine is enthroned at the highest echelon of the corporate hierarchy.