The way we work is about to change | CNN Business




New York CNN  — 

In just a few months, you’ll be able to ask a virtual assistant to transcribe meeting notes during a work call, summarize long email threads and quickly draft suggested replies, create a specific chart in Excel, and turn a Word document into a PowerPoint presentation in seconds.

And that’s just on Microsoft’s 365 platforms.

Over the past week, the rapidly evolving artificial intelligence landscape seemed to leap ahead again: Microsoft and Google each unveiled new AI-powered features for their signature productivity tools, and OpenAI introduced the next-generation version of the technology that underpins its viral chatbot, ChatGPT.

Suddenly, AI, which has long operated in the background of many services, is more powerful and more visible across a wide and growing range of workplace tools.

Google’s new features, for example, promise to help “brainstorm” and “proofread” written work in Docs. Meanwhile, if your workplace uses popular chat platform Slack, you’ll be able to have its ChatGPT tool talk to colleagues for you, potentially asking it to write and respond to new messages and summarize conversations in channels.

OpenAI, Microsoft and Google are at the forefront of this trend, but they’re not alone. IBM, Amazon, Baidu and Tencent are working on similar technologies. A long list of startups are also developing AI writing assistants and image generators.

The pitch from tech companies is clear: AI can make you more productive and eliminate the grunt work. As Microsoft CEO Satya Nadella put it during a presentation on Thursday, “We believe this next generation of AI will unlock a new wave of productivity growth: powerful copilots designed to remove the drudgery from our daily tasks and jobs, freeing us to rediscover the joy of creation.”

But the sheer number of new options hitting the market is dizzying and, as with so much else in the tech industry over the past decade, raises questions about whether they will live up to the hype or cause unintended consequences, including enabling cheating and eliminating the need for certain roles (though that may be the intent of some adopters).

Even the promise of greater productivity is unclear. The rise of AI-generated emails, for example, might boost productivity for the sender but decrease it for recipients flooded with longer-than-necessary computer-generated messages. And of course, just because everyone has the option to use a chatbot to communicate with colleagues doesn’t mean all will choose to do so.

Integrating this technology “into the foundational pieces of productivity software that most of us use every day will have a significant impact on the way we work,” said Rowan Curran, an analyst at Forrester. “But that change will not wash over everything and everyone tomorrow — learning how to best make use of these capabilities to enhance and adjust our existing workflows will take time.”

Anyone who has ever used an autocomplete option when typing an email or sending a message has already experienced how AI can speed up tasks. But the new tools promise to go far beyond that.

The renewed wave of AI product launches kicked off nearly four months ago, when OpenAI released a version of ChatGPT on a limited basis, stunning users by generating human-sounding responses to prompts, passing exams at prestigious universities and writing compelling essays on a range of topics.

Since then, the technology, in which Microsoft made a “multibillion dollar” investment earlier this year, has only improved. Earlier this week, OpenAI unveiled GPT-4, a more powerful version of the technology underpinning ChatGPT that promises to blow previous iterations out of the water.

In early tests and a company demo, GPT-4 drafted lawsuits, built a working website from a hand-drawn sketch and helped users with little to no coding experience recreate iconic games such as Pong, Tetris and Snake.

GPT-4 is a large language model that has been trained on vast troves of online data to generate responses to user prompts.
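
For readers curious what “generating responses to user prompts” looks like in practice, below is a minimal sketch of sending a prompt to GPT-4 through OpenAI’s developer API using the official `openai` Python package. The prompt text is purely illustrative and is not drawn from any of the products described in this article; the sketch assumes an `OPENAI_API_KEY` environment variable and API access to the `gpt-4` model.

```python
# Minimal sketch: send a prompt to GPT-4 and print the generated reply.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable with access to the gpt-4 model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # Illustrative prompt, similar in spirit to the workplace tasks above
        {"role": "user", "content": "Summarize this email thread in three bullet points: ..."},
    ],
)

# The model's generated text comes back as an ordinary string
print(response.choices[0].message.content)
```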

It’s the same technology that underpins two new Microsoft features: “Copilot,” which will help edit, summarize, create and compare documents across the company’s platforms, and Business Chat, an agent that essentially rides along with users as they work, trying to understand and make sense of their Microsoft 365 data.

The agent will know, for example, what’s in a user’s email and on their calendar for the day, as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update.

Curran said just how much these AI-powered tools will change work depends on the application. A word processing application could help generate outlines and drafts, a slideshow program could speed up design and content creation, and a spreadsheet app could help more users work with data and make data-driven decisions. The latter, he believes, will have the most significant impact on the workplace in both the short and long term.

The discussion of how these technologies will impact jobs “should focus on job tasks rather than jobs as a whole,” he said.

Although OpenAI’s GPT-4 update promises fixes for some of the technology’s biggest challenges, including its potential to perpetuate biases, get facts wrong and respond in an aggressive manner, some of these issues could still find their way into the workplace, especially when it comes to interacting with others.

Arijit Sengupta, CEO and founder of AI solutions company Aible, said a problem with any large language model is that it tries to please the user and typically accepts the premise of the user’s statements.

“If people start gossiping about something, it will accept it as the norm and then start generating content [related to that],” said Sengupta, adding that it could escalate interpersonal issues and turn into bullying at the office.

In a tweet earlier this week, OpenAI CEO Sam Altman wrote that the technology behind these systems is “still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.” The company reiterated in a blog post that “great care should be taken when using language model outputs, particularly in high-stakes contexts.”

Arun Chandrasekaran, an analyst at Gartner Research, said organizations will need to educate their users on what these solutions are good at and what their limitations are.

“Blind trust in these solutions is as dangerous as complete lack of faith in the effectiveness of it,” Chandrasekaran said. “Generative AI solutions can also make up facts or present inaccurate information from time to time – and organizations need to be prepared to mitigate this negative impact.”

At the same time, many of these applications are not up to date (GPT-4’s training data cuts off around September 2021). The onus will be on users to do everything from double-checking accuracy to adjusting the language to reflect the tone they want. It will also be important to get buy-in and support across workplaces for the tools to take off.

“Training, education and organizational change management is very important to ensure that employees are supportive of the efforts and the tools are used in the way they were intended to,” Chandrasekaran said.


