Writing in The New Yorker this week, Jaron Lanier asks what an action plan for society would look like. The “data dignity” solution, he suggests, could be more crucial than ever.
Here’s the basic principle: most of our data is currently given away in exchange for free services. Lanier argues that in the age of AI, this must stop. The powerful models now working their way into society should instead “be connected to humans” who have provided them with so much information to ingest.
The idea is to “pay people for their creations even if they’re filtered and recombined into something unrecognizable.”
The concept of data dignity is not new. Lanier first introduced it in a Harvard Business Review article titled “A Blueprint for a Better Digital Society,” published in 2018.
As he wrote at the time with coauthor Glen Weyl, an economist: the tech sector’s rhetoric suggests that artificial intelligence (AI) and automation will cause a wave of underemployment. Lanier and Weyl noted that UBI proponents’ predictions are extreme and “leave no other outcome”: “Either we’ll have mass poverty despite technological advances, or wealth will be taken under the national control of a social fund in order to provide citizens with a universal basic income.”
The authors claimed that the problem was a “hyperconcentration of power” and the “undermining or ignoring of the value created by the data creators”.
Untangling the problem
Giving people credit for their contributions is not easy. Lanier admits that even researchers focused on data dignity can’t agree on how an AI model’s ingested data should be accounted for, or how detailed an accounting they should attempt. Still, he thinks it could be done, gradually.
Even where there is a will, though, a more immediate challenge stands in the way: lack of access. OpenAI used to disclose some details about its training data, but it has since closed up completely, citing safety and competition concerns. When OpenAI President Greg Brockman described the training data for GPT-4, the company’s latest and most powerful large language model, to TechCrunch last month, he said it derived from a “variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but he declined to offer anything more specific.
Regulators, meanwhile, are unsure of how to proceed. OpenAI, whose technology in particular is spreading like wildfire, is already in the crosshairs of a growing number of countries, including Italy, whose data protection authority has blocked the use of its popular ChatGPT chatbot. French, German, Irish and Canadian data regulators are also investigating how it collects and uses data.
Margaret Mitchell, an AI researcher who was formerly Google’s AI ethics co-lead, told MIT Technology Review that it may be nearly impossible at this point for these companies to identify individuals’ data and remove it from their models.
The outlet noted that OpenAI would be better off today if data record-keeping had been built into its systems from the beginning. Instead, it is standard in the AI industry to create datasets by scraping data from the internet and then outsourcing the work of cleaning it.
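To make that record-keeping point concrete, here is a minimal, hypothetical sketch in Python. It is not OpenAI’s pipeline or any real library’s API; the ProvenanceRecord, ingest and records_for names are invented for illustration. The idea is simply that if a source record is attached to every document at ingestion time, individual contributions can later be located and, if necessary, removed.

```python
# Hypothetical sketch: attach provenance metadata to each scraped document
# at ingestion time, so contributions can be traced (or deleted) later.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class ProvenanceRecord:
    """Metadata captured alongside one scraped document (illustrative fields)."""
    source_url: str
    license: str          # e.g. "CC-BY-4.0" or "unknown"
    retrieved_at: str     # UTC timestamp of the scrape
    content_sha256: str   # fingerprint of the exact text that was ingested

def ingest(url: str, text: str, license: str, ledger: list) -> ProvenanceRecord:
    """Record where a training document came from before it enters a dataset."""
    record = ProvenanceRecord(
        source_url=url,
        license=license,
        retrieved_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(text.encode("utf-8")).hexdigest(),
    )
    ledger.append(record)
    return record

def records_for(ledger: list, domain: str) -> list:
    """Find every record from a given domain, e.g. to honor a removal request."""
    return [r for r in ledger if domain in r.source_url]

if __name__ == "__main__":
    ledger = []
    ingest("https://example.com/blog/post-1", "Some blog text...", "unknown", ledger)
    ingest("https://example.org/essay", "An essay...", "CC-BY-4.0", ledger)
    print(json.dumps([asdict(r) for r in ledger], indent=2))
    print("Records matching example.com:", len(records_for(ledger, "example.com")))
```

Nothing about this is technically difficult; the researchers’ point is that it was skipped at scale, which is why retrofitting it now is so hard.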
A way forward
Delivering Lanier’s “data dignity” will be a challenge for companies that have only a limited awareness of what their models contain. Whether that limitation makes it impossible is something only time will tell.
Figuring out how to get started is the important part. Even a rough accounting would go some way toward making people feel that their work still belongs to them, even if the final product looks “different” after a large language model has been applied; a toy sketch of what that accounting might look like follows.
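What follows is a hypothetical sketch in Python of the simplest version of that accounting, with invented names and made-up numbers. It assumes per-contributor influence scores for one valuable model output already exist (computing such scores reliably is the open research problem data dignity researchers disagree about) and simply splits a payment among the most influential contributors pro rata.

```python
# Hypothetical sketch: split a payout for one valuable model output among
# its most influential contributors, proportionally to assumed influence scores.
from typing import Dict

def split_payout(influence: Dict[str, float], payout: float, top_n: int = 3) -> Dict[str, float]:
    """Divide `payout` among the `top_n` most influential contributors.

    `influence` maps contributor IDs to pre-computed influence scores;
    how those scores would be obtained is exactly the unsolved part.
    """
    top = sorted(influence.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    total = sum(score for _, score in top)
    if total == 0:
        return {}
    return {name: round(payout * score / total, 2) for name, score in top}

if __name__ == "__main__":
    # Made-up contributors and scores for a single output.
    influence = {
        "essayist_a": 0.42,
        "photographer_b": 0.31,
        "forum_user_c": 0.08,
        "blogger_d": 0.19,
    }
    print(split_payout(influence, payout=10.00))
```

The arithmetic here is trivial; everything hard lives in the influence scores, which is why Lanier concedes that any real accounting would have to be built up gradually.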
As these new tools reshape more and more of the world, frustration over who controls what is only likely to grow. Already, OpenAI and others face numerous, wide-ranging copyright infringement lawsuits over whether they have the right to scrape the entire internet to feed their algorithms.
In the New Yorker piece, Lanier suggests that recognizing the contributions people make to AI systems may be what humans need in order to maintain their sanity.
He believes people need agency. Universal basic income, he says, is “like putting everyone on welfare to preserve black-box artificial intelligence.”
Meanwhile, ending the “black box nature of our current AI models” would make it easier to account for people’s contributions, which would in turn make people more inclined to stay engaged and keep contributing.
The question, he writes, may come down to whether we create a creative class or a dependent class. Which one would you rather be a part of?