Apple tests ‘Apple GPT,’ develops generative AI tools
Though Apple has woven AI features into products for years, it’s now playing catch-up in the buzzy market for generative tools.
Meta CEO Mark Zuckerberg said the company is partnering with Microsoft to introduce the next generation of its AI large language model and making the technology, known as Llama 2, free for research and commercial use.
The wave of lawsuits, high-profile complaints and proposed regulation could pose the biggest barrier yet to the adoption of “generative” AI tools, which have gripped the tech world ever since OpenAI launched ChatGPT to the public late last year.
The fast-moving AI landscape is creating a dynamic in which corporations are experiencing both “a fear of missing out and a fear of messing up.”
Mobile and desktop traffic to ChatGPT’s website worldwide fell 9.7 percent in June from the previous month, according to internet data firm Similarweb.
AMD joins a growing list of technology companies capitalizing on broader interest from businesses seeking new AI tools that can analyze data, help make decisions and potentially replace some tasks currently performed by human workers.
The growth of artificial intelligence tools and the increasing sophistication of the technology have made AI a target for regulators, especially in hiring.
The U.S. government and private sector in recent months have begun more publicly weighing the possibilities and perils of artificial intelligence.
Twitter owner Elon Musk has previously accused Microsoft and its partner OpenAI of “illegally” using Twitter data to develop sophisticated AI systems such as ChatGPT.
Amazon.com is rivaling efforts by Microsoft and Google to weave generative artificial intelligence into their search engines.
Computer scientists who helped build the foundations of today’s artificial intelligence technology are expressing concerns, but that doesn’t mean they agree on what the dangers are or how to prevent them.
Purdue University on Friday announced a major undertaking and a multi-million-dollar investment to advance the university’s commitment to computer sciences, artificial intelligence and microchip research.
Lawmakers are anxiously eyeing the AI arms race, driven by the explosion of OpenAI’s chatbot ChatGPT.
More than 1,100 people in the industry signed the petition, which warns that AI systems with “human-competitive intelligence can pose profound risks to society and humanity.”
Microsoft is infusing artificial intelligence tools into its suite of office software, allowing users to do things like summarize long emails, draft stories in Word and animate slides in PowerPoint.
Web 3.0 is built upon the core concepts of decentralization and openness. Its features include artificial intelligence, machine learning and blockchain technology.
A January survey of 300 human resources leaders at U.S. companies revealed that 98 percent of them say software and algorithms will help them make layoff decisions this year.
The tech industry’s latest artificial intelligence constructs can be pretty impressive at some things. But they’re not so good—and sometimes dangerously bad—at handling other seemingly straightforward tasks.
Last month, Faegre Drinker announced that Indianapolis-based partner Scott Kosnoff would co-lead an interdisciplinary artificial intelligence and algorithmic decision-making team that the firm calls AI-X. IBJ talked to Kosnoff about the team.
The federal government said Thursday that artificial intelligence technology used to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities.