07/29/2024 / By Belle Carter
A group of lawmakers has demanded that OpenAI CEO Sam Altman submit data on the artificial intelligence (AI) company’s plans to meet its safety and security commitments, following concerns about the technology and the company’s safety protocols.
The senators, led by Sen. Brian Schatz (D-HI), wrote a letter asking Altman to commit “to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis and assessment.”
The four Democrats and one Independent asked a series of questions about how the company is working to ensure its AI cannot be misused to provide potentially harmful information to members of the public, such as instructions for building weapons or help with coding malware. The group also asked for assurances that employees who raise potential safety issues would not be silenced or punished.
“We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments and the company’s identification and mitigation of cybersecurity threats,” the senators said in the letter.
Altman has been quoted as saying his company is developing “levels to help us and stakeholders categorize and track AI progress.” Reclaim the Net questioned this statement, asking “who exactly are your stakeholders, Altman?” The outlet also noted that OpenAI very recently appointed former National Security Agency Director Paul M. Nakasone to its board. “Could Nakasone’s switch from the government to tech be considered as a way to implement ‘prior restraint?’” the news outlet wrote. This only heightens concerns about Big Tech colluding with Big Government among ordinary citizens who fear that the information they are able to access will be censored and “pre-bunked.”
Earlier this month, OpenAI whistleblowers penned a letter to the Securities and Exchange Commission alleging that the company illegally issued restrictive severance, nondisclosure and employee agreements that could penalize workers who wished to raise concerns with federal regulators. In a statement to the Washington Post, OpenAI spokesperson Hannah Wong said the company has “made important changes to our departure process to remove nondisparagement terms from staff agreements.”
Meanwhile, tech giants Microsoft and Apple opted not to take seats on OpenAI’s board, despite Microsoft’s $13 billion investment in the company in 2023. The move came amid complaints about the growing complexity of AI oversight and increasing attention from regulators. Also, former OpenAI employee William Saunders recently said he left the company because he feared its research could seriously threaten humanity. While Saunders isn’t worried about the current version of ChatGPT, he fears future versions and the development of AI that could surpass human intelligence. He believes AI workers must warn the public about potentially dangerous AI developments. (Related: Former OpenAI employees release “A Right to Warn” document warning about advanced AI risks.)
Altman’s tech firm announced on July 25 that it is launching a prototype of its search engine, called SearchGPT, which it claims will give users AI-based “fast and timely answers with clear and relevant sources.”
“We think there is room to make search much better than it is today,” Altman wrote Thursday in a post on X, formerly Twitter.
“we think there is room to make search much better than it is today. we are launching a new prototype called SearchGPT: https://t.co/A28Y03X1So we will learn from the prototype, make it better, and then integrate the tech into ChatGPT to make it real-time and maximally helpful.” — Sam Altman (@sama) July 25, 2024
According to OpenAI spokesperson Kayla Wood, SearchGPT was developed in collaboration with various news partners, including the owners of the Wall Street Journal, the Associated Press and Vox Media. “News partners gave valuable feedback, and we continue to seek their input,” she claimed.
Upon landing on the search engine, users see a large text box that asks, “What are you looking for?” But instead of returning a plain list of links, SearchGPT will “organize and make sense of them.” After the results appear, users can reportedly ask follow-up questions or click the sidebar to open other relevant links. There is also a feature called “visual answers,” The Verge reported.
According to the tech site, SearchGPT is just a “prototype” for now and is powered by the GPT-4 family of models. Per Wood, the tool will be accessible to only 10,000 test users at launch. She also said OpenAI is working with third-party partners and using direct content feeds to build its search results. The main goal is to eventually integrate the search features directly into ChatGPT.
The launch of this new feature will have further direct implications for Google, which has for years dominated the online search market. Since ChatGPT launched in November 2022, Google has reportedly been hustling to keep pace in the AI arms race. Google’s parent company Alphabet saw its shares fall more than three percent on Thursday to close at $167.28, while the Nasdaq was down less than one percent.
“Google has been kind of shaking in their boots a little bit since this stuff first popped off,” said Daniel Faggella, founder and head of research at Emerj Artificial Intelligence Research, referring to generative artificial intelligence. “We haven’t seen their company crumble in the interim, but we have seen them kind of fumble.”
Go to FutureTech.news for stories similar to this.