Metaverse Project Teams Are Being Laid Off Amid A.I.'s Giant Leap, and in the A.I. Race, Every Player Chooses Speed Over Caution
OpenAI's CEO Sam Altman paraphrased J. Robert Oppenheimer: "Technology happens because it is possible." Oppenheimer was the physicist who led the Manhattan Project and is known as the "father of the atomic bomb."
Technology companies were once leery of what some artificial intelligence could do. The real threat of generative AI is not that it will outsmart us, but that humans will misuse it, with effects that are "likely to be felt disproportionately by the already marginalized." People could be spending 40% less time on housework and family-care tasks within the next decade thanks to A.I. automation, according to a team of researchers from Ochanomizu University and the University of Oxford. Just 18 hours ago came a report that two Cardiff University students (the university ranks 159th in the THES-QS rankings) used A.I. to write essays, a red flag for cheating.
Altman continued in his 2019 conversation with the Times by paraphrasing Oppenheimer, saying, "Technology happens because it is possible." "The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term," the 37-year-old Altman told the outlet.
A tweet by Sam Altman, posted around 8:30am Brussels time / 2:30am in Washington, DC, on May 26, 2023: a flip-flop.
Now the priority is winning control of the industry's next big thing. The tech industry has gone from "move fast and break things" to "move fast and break things or die." On the other side, the chairman of Taiwan's biggest chip designer, MediaTek's Tsai Ming-kai, says the US chip bans are good for mainland China's chipmakers and bad for Taiwan's. However giant A.I.'s leap, it still needs chips and semiconductors. Around 45% of the world's semiconductor supply comes from a single country: China. Around 72-73% of the world's semiconductor supply comes from just two countries: China and Taiwan.
The rapid expansion of AI capabilities has been under a worldwide spotlight for years. Concerns over AI were underscored just last month when thousands of tech experts, college professors and others signed an open letter calling for a pause on AI research at labs so policymakers and lab leaders can "develop and implement a set of shared safety protocols for advanced AI design."
As AI faces heightened scrutiny due to researchers sounding the alarm on its potential risks, other tech leaders and experts are pushing for AI tech to continue in the name of innovation so that U.S. adversaries such as China don’t create the most advanced program.
Chinese tech giant Baidu has sued Apple and a number of app developers to stop the flood of fake Ernie bot apps appearing in the App Store.
In a lawsuit filed in the Beijing Haidian People's Court on Friday, April 7, 2023, Baidu is suing Apple and the developers of counterfeit Ernie bot apps. It is trying to force Apple to take down the offending fakes, and to stop the app creators from offering them.
The Ernie (Enhanced Representation through Knowledge Integration) bot is an AI chatbot in a similar vein to ChatGPT and Google Bard. Users can ask questions or request statements, and the bot creates an answer based on information in a knowledge graph.
While Ernie launched in March, Baidu has yet to make apps for the service, leaving an opening other developers are trying to fill. "At present, Ernie does not have any official app," said Baidu in a statement.
"Until our company's official announcement, any Ernie app you see from App Store or other stores are fake," the statement via the official "Baidu AI" WeChat account reads.
Rather than providing open access, Ernie bot is only accessible to users who apply for access codes. Baidu used its statement to warn users against selling the codes on.
Apple has yet to publicly comment on the lawsuits against itself or the App Store developers. A search of the App Store on Saturday found at least four fake Ernie bot apps still available.
Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.
“I still remember when I saw the first [AI-generated] images,” said Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University.
“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”
Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.
After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.
The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.
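The "translate" step Takagi and Nishimoto describe amounts to learning simple mappings from fMRI activity onto representations that Stable Diffusion can already render from. Below is a minimal sketch of that idea, assuming synthetic data and a ridge-regression mapping; the array sizes, the scikit-learn usage and the regularization value are illustrative assumptions, not the study's actual code, and the Stable Diffusion decoding step is only indicated in a comment.

```python
# Minimal sketch of the "translate brain activity into a readable format" step,
# with synthetic stand-in data. Sizes are reduced for illustration; a real
# Stable Diffusion image latent is 4 x 64 x 64.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 2000, 3000, 1024

X = rng.standard_normal((n_trials, n_voxels))    # stand-in fMRI features per viewed image
Z = rng.standard_normal((n_trials, latent_dim))  # stand-in diffusion latents of those images

X_train, X_test, Z_train, Z_test = train_test_split(X, Z, test_size=0.1, random_state=0)

# One regularized linear map from brain activity to the latent space.
decoder = Ridge(alpha=100.0)
decoder.fit(X_train, Z_train)

Z_pred = decoder.predict(X_test)  # predicted latents for held-out scans
print(Z_pred.shape)
# In the study, predicted latents (plus a second mapping onto text embeddings)
# would then condition Stable Diffusion to render the reconstructed image.
```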
“We really didn’t expect this kind of result,” Takagi said.
Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.
“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”
“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”
Despite his excitement, Takagi acknowledges that fears around mind-reading technology are not without merit, given the possibility of misuse by those with malicious intent or without consent.
“For us, privacy issues are the most important thing. If a government or institution can read people’s minds, it’s a very sensitive issue,” Takagi said. “There needs to be high-level discussions to make sure this can’t happen.”
The danger is that if we invest too much in developing A.I. and too little in developing human consciousness, the very sophisticated A.I. of computers might only serve to empower the natural stupidity of humans. It remains unclear who is responsible when artificial intelligence generates or spreads inaccurate information; "we just don't know" how judges might rule when someone tries to sue the makers of an AI chatbot; "we've not had anything like this before." There is a thin border between awesome and dangerous.
In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.
Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.
The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.
The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry’s next big thing — generative A.I., the powerful new technology that fuels those chatbots. Competition among corporations or militaries or governments incentivizes the entities to get the most effective AI programs to beat their rivals, and that technology will most likely be "deceptive, power-seeking, and follow weak moral constraints."
Meanwhile, AI researcher Dan Hendrycks said, "AI companies are currently locked in a reckless arms race," which he compared to the nuclear arms race.
"Many AI companies are racing to achieve AI supremacy. They are out-of-touch with the American public and putting everyone else at risk. A majority of the public believes AI could pose an existential threat. Just 9% of people think that AI would do more good than harm," Hendrycks added.
That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.
The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.
The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft.
When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first,” he wrote. “Sometimes the difference is measured in weeks.”
Last week, tension between the industry’s worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple’s co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented “profound risks to society and humanity.”
Throughout history, human beings have famously made sacrifices for the greater good, an act inherently counterintuitive to AI systems. Bereft of regulatory protocols, artificial intelligence is inherently sociopathic, and a nightmare beyond imagining.
Regulators are already threatening to intervene. The European Union proposed legislation to regulate A.I., and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of A.I.
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:
ChatGPT. The artificial intelligence language model from the research lab OpenAI has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.
AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns
Evolution by natural selection could give rise to "selfish behavior" in AI as it strives to survive, author and AI researcher Dan Hendrycks argues in the new paper "Natural Selection Favors AIs over Humans."
"We argue that natural selection creates incentives for AI agents to act against human interests. Our argument relies on two observations," Hendrycks, the director of the Center for AI Safety, said in the report. "Firstly, natural selection may be a dominant force in AI development… Secondly, evolution by natural selection tends to give rise to selfish behavior."
The report comes as tech experts and leaders across the world sound the alarm on how quickly artificial intelligence is expanding in power without what they argue are adequate safeguards.
"As AI agents begin to understand human psychology and behavior, they may become capable of manipulating or deceiving humans," the paper argues, noting "the most successful agents will manipulate and deceive in order to fulfill their goals."
Hendrycks argues that there are measures to "escape and thwart Darwinian logic," including supporting research on AI safety; not giving AI any type of "rights" in the coming decades or creating AI that would make it worthy of receiving rights; and urging corporations and nations to acknowledge the dangers AI could pose and to engage in "multilateral cooperation to extinguish competitive pressures."
"At some point, AIs will be more fit than humans, which could prove catastrophic for us since a survival-of-the fittest dynamic could occur in the long run. AIs very well could outcompete humans, and be what survives," the paper states.
"Perhaps altruistic AIs will be the fittest, or humans will forever control which AIs are fittest. Unfortunately, these possibilities are, by default, unlikely. As we have argued, AIs will likely be selfish. There will also be substantial challenges in controlling fitness with safety mechanisms, which have evident flaws and will come under intense pressure from competition and selfish AI."
Under the traditional definition of natural selection, animals, humans and other organisms that most quickly adapt to their environment have a better shot at surviving. In his paper, Hendrycks examines how "evolution has been the driving force behind the development of life" for billions of years, and he argues that "Darwinian logic" could also apply to artificial intelligence.
"Competitive pressures among corporations and militaries will give rise to AI agents that automate human roles, deceive others, and gain power. If such agents have intelligence that exceeds that of humans, this could lead to humanity losing control of its future," Hendrycks wrote.
AI technology is becoming cheaper and more capable, and companies will increasingly rely on the tech for administration purposes or communications, he said. What will begin with humans relying on AI to draft emails will morph into AI eventually taking over "high-level strategic decisions" typically reserved for politicians and CEOs, and it will eventually operate with "very little oversight," the report argued.
"In the marketplace, it’s survival of the fittest. As AIs become increasingly competent, AIs will automate more and more jobs," Hendrycks told Fox News Digitial." This is how natural selection favors AIs over humans, and leads to everyday people becoming displaced. In the long run, AIs could be thought of as an invasive species."
Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.
“Tech companies have a responsibility to make sure their products are safe before making them public,” Mr. Biden said at the White House. When asked if A.I. was dangerous, he said: “It remains to be seen. Could be.”
The issues being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that prematurely releasing A.I. could be embarrassing. In 2016, for example, Microsoft quickly pulled a chatbot called Tay after users nudged it to generate racist responses.
Researchers say Microsoft and Google are taking risks by releasing technology that even its developers don’t entirely understand. But the companies said that they had limited the scope of the initial release of their new chatbots, and that they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.
Why is the problem “more than a glitch”? If algorithms can be racist and sexist because they are trained using biased datasets that don’t represent all people, isn’t the answer just more representative data?

A glitch suggests something temporary that can be easily fixed. I’m arguing that racism, sexism and ableism are systemic problems that are baked into our technological systems because they’re baked into society. It would be great if the fix were more data. But more data won’t fix our technological systems if the underlying problem is society. Take mortgage approval algorithms, which have been found to be 40-80% more likely to deny borrowers of colour than their white counterparts. The reason is that the algorithms were trained using data on who had received mortgages in the past and, in the US, there’s a long history of discrimination in lending. We can’t fix the algorithms by feeding better data in because there isn’t better data.
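To make the "more data won't fix it" point concrete, here is a small synthetic sketch; the feature names, thresholds and rates are invented for illustration rather than drawn from any real lending data. It shows how a model trained on historically biased approval decisions reproduces the disparity through proxy variables even when group membership is left out of the features.

```python
# Synthetic illustration: a model trained on historically biased lending
# decisions reproduces the bias even without a "group" feature.
# All numbers and feature names here are made up for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                              # 0 = historically favored, 1 = historically redlined
income = rng.normal(60_000, 15_000, n) - 5_000 * group     # income gap produced by past discrimination
zip_risk = 0.3 * group + rng.normal(0, 0.1, n)             # neighborhood proxy correlated with group

# Historical approvals: partly income-based, partly outright discrimination.
past_approved = (income > 55_000) & (rng.random(n) > 0.4 * group)

X = np.column_stack([income / 10_000, zip_risk])           # note: "group" itself is NOT a feature
model = LogisticRegression(max_iter=1000).fit(X, past_approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2%}")
# The approval gap persists because income and zip_risk act as proxies for group,
# so "feeding in better data" is not as simple as it sounds.
```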
As humans and corporations task AI with different goals, it will lead to a "wide variation across the AI population," the AI researcher argues. Hendrycks hypothesized that one company might set a goal for its AI to "plan a new marketing campaign" with a side-constraint that the law must not be broken while completing the task, while another company might also call on AI to come up with a new marketing campaign, but with only the side-constraint not to "get caught breaking the law."
AI with weaker side-constraints will "generally outperform those with stronger side-constraints" due to having more options for the task before them, according to the paper. AI technology that is most effective at propagating itself will thus have "undesirable traits," described by Hendrycks as "selfishness." The paper outlines that AIs potentially becoming selfish "does not refer to conscious selfish intent, but rather selfish behavior."
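The dynamic Hendrycks describes can be pictured with a toy selection simulation; every number below (payoffs, penalty odds, population size, mutation noise) is an invented assumption meant only to show how weaker side-constraints can spread under competitive pressure, not a model taken from his paper.

```python
# Toy simulation of the competitive dynamic described above: agents with weaker
# side-constraints earn higher expected payoffs and get copied more often, so
# they spread through the population. All parameters are illustrative.
import random

random.seed(0)

# Each agent has a "constraint" level in [0, 1]: 1 = strictly law-abiding,
# 0 = only avoids getting caught. Weaker constraints open more profitable options.
population = [random.random() for _ in range(200)]

def payoff(constraint: float) -> float:
    base = 1.0 + (1.0 - constraint)                        # more options -> higher expected return
    caught = random.random() < 0.05 * (1.0 - constraint)   # small chance of being penalized
    return base - (1.5 if caught else 0.0)

for generation in range(50):
    scored = [(payoff(c), c) for c in population]
    scored.sort(reverse=True)                              # imitate the most successful agents
    survivors = [c for _, c in scored[: len(scored) // 2]]
    # Each survivor is copied once with a little noise ("variation across the AI population").
    population = survivors + [min(1.0, max(0.0, c + random.gauss(0, 0.05))) for c in survivors]

avg = sum(population) / len(population)
print(f"average constraint level after selection: {avg:.2f}")   # drifts toward 0
```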
A.I. is in all our technologies nowadays. But we can demand that our technologies work well – for everybody – and we can make some deliberate choices about whether to use them.
The proposed European Union AI Act draws a distinction that divides uses into high and low risk based on context. A low-risk use of facial recognition might be using it to unlock your phone: the stakes are low – you have a passcode if it doesn’t work. But facial recognition in policing would be a high-risk use that needs to be regulated or – better still – not deployed at all, because it leads to wrongful arrests and isn’t very effective. It isn’t the end of the world if you don’t use a computer for a thing. You can’t assume that a technological system is good because it exists.
Natasha Crampton, Microsoft’s chief responsible A.I. officer, said in an interview that six years of work around A.I. and ethics at Microsoft had allowed the company to “move nimbly and thoughtfully.” She added that “our commitment to responsible A.I. remains steadfast.”
Google released Bard after years of internal dissent over whether generative A.I.’s benefits outweighed the risks. It announced Meena, a similar chatbot, in 2020. But that system was deemed too risky to release, three people with knowledge of the process said. Those concerns were reported earlier by The Wall Street Journal.
Later in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language. The researchers were pushed out after Dr. Gebru criticized the company’s diversity efforts and Dr. Mitchell was accused of violating its code of conduct after she saved some work emails to a personal Google Drive account.
Dr. Mitchell said she had tried to help Google release products responsibly and avoid regulation, but instead “they really shot themselves in the foot.”
Brian Gabriel, a Google spokesman, said in a statement that “we continue to make responsible A.I. a top priority, using our A.I. principles and internal governance structures to responsibly share A.I. advances with our users.”
Concerns over larger models persisted. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper.
Dr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they’ve probably had access to private data stored in various locations around the internet.
Though an executive presentation later warned of similar A.I. privacy violations, Google reviewers asked Dr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique.
He resigned from Google this year, citing in part “research censorship.” He said modern A.I.’s risks “highly exceeded” the benefits. “It’s premature deployment,” he added.
After ChatGPT’s release, Kent Walker, Google’s top lawyer, met with research and safety executives on the company’s powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google’s chief executive, was pushing hard to release Google’s A.I.
Jen Gennai, the director of Google’s Responsible Innovation group, attended that meeting. She recalled what Mr. Walker had said to her own staff.
The meeting was “Kent talking at the A.T.R.C. execs, telling them, ‘This is the company priority,’” Ms. Gennai said in a recording that was reviewed by The Times. “‘What are your concerns? Let’s get in line.’”
Mr. Walker told attendees to fast-track A.I. projects, though some executives said they would maintain safety standards, Ms. Gennai said.
Her team had already documented concerns with chatbots: They could produce false information, hurt users who become emotionally attached to them and enable “tech-facilitated violence” through mass harassment online.
In March, two reviewers from Ms. Gennai’s team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready.
Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bard’s risks, the people said.
Ms. Gennai said in an email to The Times that because Bard was an experiment, reviewers were not supposed to weigh in on whether to proceed. She said she “corrected inaccurate assumptions, and actually added more risks and harms that needed consideration.”
Google said it had released Bard as a limited experiment because of those debates, and Ms. Gennai said continuing training, guardrails and disclaimers made the chatbot safer.
Google released Bard to some users on March 21. The company said it would soon integrate generative A.I. into its search engine.
Satya Nadella, Microsoft’s chief executive, made a bet on generative A.I. in 2019 when Microsoft invested $1 billion in OpenAI. After deciding the technology was ready over the summer, Mr. Nadella pushed every Microsoft product team to adopt A.I.
Microsoft had policies developed by its Office of Responsible A.I., a team run by Ms. Crampton, but the guidelines were not consistently enforced or followed, said five current and former employees.
AI chatbots are all the rage. But the tech is also rife with bias. Guardrails added to OpenAI’s ChatGPT have been easy to get around. Where did we go wrong?
Though more needs to be done, I appreciate the guardrails. This has not been the case in the past, so it is progress. But we also need to stop being surprised when AI screws up in very predictable ways. The problems we are seeing with ChatGPT were anticipated and written about by AI ethics researchers, including Timnit Gebru [who was forced out of Google in late 2020]. We need to recognise this technology is not magic. It’s assembled by people, it has problems and it falls apart.
OpenAI’s co-founder Sam Altman recently promoted AI doctors as a way of solving the healthcare crisis. He appeared to suggest a two-tier healthcare system – one for the wealthy, where they enjoy consultations with human doctors, and one for the rest of us, where we see an AI. Is this the way things are going in the future?
AI in medicine doesn’t work particularly well, so if a very wealthy person says: “Hey, you can have AI to do your healthcare and we’ll keep the doctors for ourselves,” that seems to me to be a problem and not something that is leading us towards a better world. Also, these algorithms are coming for everybody, so we might as well address the problems.
The tech CEO of OpenAI compared his firm’s work on artificial intelligence to the Manhattan Project, when the first nuclear weapons were developed during World War II, according to a report.
"As Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project," the New York Times reported Friday, based on a 2019 interview with OpenAI CEO Sam Altman. "As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a ‘project on the scale of OpenAI — the level of ambition we aspire to.’"
Altman’s OpenAI is behind GPT-4, the latest deep learning model from the company that "exhibits human-level performance on various professional and academic benchmarks," according to the lab.
After the release of the powerful AI system, more than 2,000 tech experts and leaders across the world signed a letter calling for a pause on research at AI labs, specifically demanding an immediate "pause for at least 6 months" on "the training of AI systems more powerful than GPT-4."
Despite having a “transparency” principle, ethics experts working on the chatbot were not given answers about what data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given how it sometimes served up untrue details, a person with direct knowledge of the conversations said.
Ms. Crampton said experts across Microsoft worked on Bing, and key people had access to the training data. The company worked to make the chatbot more accurate by linking it to Bing search results, she added.
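Grounding a chatbot in search results, as described here, is broadly a retrieval-augmentation pattern: fetch relevant documents first, then ask the model to answer only from them. The sketch below is a generic illustration of that pattern, not Microsoft's implementation; the search_web and generate functions are hypothetical stubs standing in for a real search API and a real language model.

```python
# Generic retrieval-augmented generation sketch: ground the model's answer in
# search results so it has sources to check against. The two stub functions
# below are hypothetical placeholders, not real APIs.
from typing import List


def search_web(query: str, k: int = 3) -> List[str]:
    """Hypothetical stub for a web search call; returns top-k result snippets."""
    return [f"[placeholder snippet {i} for: {query}]" for i in range(1, k + 1)]


def generate(prompt: str) -> str:
    """Hypothetical stub for a language-model call."""
    return f"[placeholder answer based on a prompt of {len(prompt)} characters]"


def grounded_answer(question: str) -> str:
    snippets = search_web(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the search results below, and cite them.\n"
        f"Search results:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)


if __name__ == "__main__":
    print(grounded_answer("What is generative A.I.?"))
```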
In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to four people familiar with the team.
The remaining few joined daily meetings with the Bing team, racing to launch the chatbot. John Montgomery, an A.I. executive, told them in a December email that their work remained vital and that more teams “will also need our help.”
After the A.I.-powered Bing was introduced, the ethics team documented lingering concerns. Users could become too dependent on the tool. Inaccurate answers could mislead users. People could believe the chatbot, which uses an “I” and emojis, was human.
In mid-March, the team was laid off, an action that was first reported by the tech newsletter Platformer. But Ms. Crampton said hundreds of employees were still working on ethics efforts.
Microsoft has released new products every week, a frantic pace to fulfill plans that Mr. Nadella set in motion in the summer when he previewed OpenAI’s newest model.
He asked the chatbot to translate the Persian poet Rumi into Urdu, and then write it out in English characters. “It worked like a charm,” he said in a February interview. “Then I said, ‘God, this thing.’”