Intelligent Automation: Transformation in a Time of Uncertainty
Roundtable
November 8, 2023 | Seattle

Companies across industries are investing in automation and artificial intelligence projects that may increase efficiency, boost productivity, and help ensure their long-term success. Executives from business and IT gathered for this roundtable dinner to share their experience and ideas about how their firms can harness the power of artificial intelligence. 

Participants:

    • Maryam Gholami, Head of Product & Innovation, AI for Monetization, Meta
    • Aisha Kaba, Advanced Analytics, PACCAR
    • Madhu Kochar, Vice President, Product Development, IBM Automation
    • Joseph Mackie, Principal Data, AI & Automation Manager, IBM
    • Gayatri Narayan, Senior Vice President, Digital Products & Services, PepsiCo
    • Robert Neer, Vice President of Product Management, US Healthcare, Walgreens
    • Leni Phan, Director, Program Acceleration Office, Starbucks 
    • Craig Saldanha, Chief Product Officer, Yelp
    • Shri Santhanam, Executive Vice President, Analytics & AI, Experian 
    • Sara Vaezy, Executive Vice President, Chief Strategy & Digital Officer, Providence
    • Ambrish Verma, Chief Product Officer, formerly with Flex
    • Taylan Yildirim, Vice President & Head of Cloud Software & Services Operations, Ericsson

Bloomberg Participant:

    • Lisa Mateo, Business Correspondent, Bloomberg

Roundtable Highlights

Kicking off the discussion, Lisa Mateo, Business Correspondent, Bloomberg, asks where the best place is to start using artificial intelligence within a company.

Madhu Kochar, Vice President, Product Development, IBM Automation, jumps in, explaining that research was the main starting point for IBM. “Being a technology company, you always have to be at the leading end. We have a huge investment in research.” She notes that their research is mainly focused on building Granite models.

Shri Santhanam, Executive Vice President, Analytics & AI, Experian, contributes as well, bringing up the uncertainties surrounding AI. “With generative AI, I think the part that’s scary is that it’s actually cognizant—it’s thinking. There’s a portion of unstructured thinking that we have so far said only humans can do.” He also sees it as an opportunity, adding, “It’s a natural evolution. It’s a call for us humans to evolve and take it to the next level.”

Speaking to the possible drawbacks of implementing AI systems, Gayatri Narayan, Senior Vice President, Digital Products & Services, PepsiCo, says, “I think internally it’s a struggle. I call it the great divide. There’s a generation of new employees—digital natives. They say why am I spending all my time with spreadsheets? Then there’s the other side that says I don’t want AI, I’m good with my process. How do you then cater to those different categories of employees? I don’t think we have a scalable solve. It’s a tough one.”

Robert Neer, Vice President of Product Management, US Healthcare, Walgreens looks at the issue from a healthcare point of view. Bringing up the issue of pharmacist capacity, he says, “They’re overwhelmed, they’re walking out, they have too much to do. We’re looking at opportunities to make some of the drudgery work more efficient. That’s where we’re trying to push.” He turns it over to Sara Vaezy, Executive Vice President, Chief Strategy & Digital Officer, Providence, asking, “How do clinicians and providers look at the technology?” 

In response, Vaezy describes the three A’s of AI: assist, augment, and automate. “For clinicians specifically, automation cannot apply to clinical tasks. That’s not a battle that we’re fighting right now. Anything that’s automated is more focused on patients and consumers to support self-service.” However, she points out that assisting and augmenting do work in a clinical context.

Ambrish Verma, Chief Product Officer, formerly with Flex adds another “A” to the mix: amplify. He explains that the old-fashioned way of generating leads has been greatly amplified by generative AI. 

Switching gears, Madhu Kochar introduces the fears around AI and data privacy. She explains that trust and transparency are huge. “You have to understand how the data has been trained. Who’s looked at it, is there robotics or not. It’s all about making sure you can trust and explain where the AI is coming from.” Neer echoes her sentiment on demystifying AI. “The more magical it seems, the more what-if scenarios people generate to explain it or to worry about it.”

Lisa Mateo asks the table if they’ve encountered any roadblocks with this new technology.

Aisha Kaba, Advanced Analytics, PACCAR talks about the cultural shifts associated with implementing AI. “What’s worked for me is baby steps, culturally.” She explains that she approaches the skeptics first. “Once I address each one of their concerns, it becomes less of a black box to them. Then they become advocates.”

Craig Saldanha, Chief Product Officer, Yelp, offers unforeseen consequences of AI in media. “In 2020, there was a big push of the use of AI in movies and TV shows. The north star was that you should show a movie in a single language and then you could make it available to everyone globally in a way that feels natural and personal. But then as you’ve seen with the actors’ strike, there’s a bunch of negative consequences as well with our actors not being adequately compensated. There’s this whole valid set of objections that you have to unpack as you’re advancing this tech.”

In agreement, Sara Vaezy jumps in, “It’s very difficult to predict the potential outcomes, and it gets out of our hands in some ways.” She poses the question: if we agree that we should be managing the technology, to what extent should we do so? 

The discussion shifts to the issue of privacy. Kochar asserts on behalf of IBM, “Our bottom line from the get-go is, whatever we train the models on, if it’s your data, it’s your data. Nobody else knows.” She also states, “Data is your natural resource. It’s oil.”

Joseph Mackie, Principal Data, AI & Automation Manager, IBM speaks on the challenges of knowing whether or not to build data systems. “It goes to the point of build vs. buy. If it’s the crown jewels of your business, build it. You don’t want someone else taking it and proliferating it, monetizing it, and using AI to replicate it. If it’s a mundane task that’s a commodity, buy it.” 

Santhanam discusses how Experian views data privacy. “For us, it’s a pretty big topic. Ultimately, our view is the data really belongs to the consumers and it needs to be in service to the consumer.” He also brings up a valuable point on the nature of consent, “The issue around consent is largely transparency. We have them consenting to a lot of stuff, it’s just that it’s been in legal fine print. The whole notion of consent means bring it in a frictionless, consumable way in which you really understand, beyond legal speak, what’s happening with your data.”

Kochar brings up phishing concerns. She explains that her family has come up with passwords to combat a new scamming trend where criminals use AI to replicate a loved one’s voice and ask for large sums of money. “These are the conversations happening at the dinner table.” Taylan Yildirim, Vice President & Head of Cloud Software & Services Operations, Ericsson, echoes this sentiment with a personal phishing story of his own: “Things are changing.”

The conversation turns toward the obligation of transparency in journalism. “I believe that people have the right to know where their information is coming from,” says Mateo. “Some papers have it in the type that this was generated by AI. I’ve started to see that. It’s slight—in things like sports recaps where it’s easily transcribed.”

“But the question is, is it beneficial to consumers to let them know?” Craig Saldanha muses. “Information is power and you should give the consumer a chance to decide whether they trust an opinion.” Neer adds to the conversation, “The knowledge that something is generated by AI, for some audiences, is just enough to dismiss what could be a perfectly factual representation.” Shri Santhanam contributes another point of view: he believes the question is not whether AI wrote the content, but whether you can trust the content.

The table examines how AI systems should be governed. Sara Vaezy says, “From our experience, we were trying to govern and put guardrails around generative AI as though it were one monolithic technology.” She explains how Providence created one “uber policy.” As time went on, the policies and regulations grew. “The big change that we put in place from a governance standpoint is that we said this needs to be done from a learning perspective—almost like what we do with people.” Now, Providence puts in place a set of initial principles and uses a strong feedback loop to adjust them.

Bringing the conversation back to OpenAI, Vaezy says, “When ChatGPT came out, there were all these concerns about cheating.” She discussed this issue with faculty at the University of Puget Sound in Tacoma, Washington. She says one professor gave students the choice of whether or not to use ChatGPT. Only three out of around 40 actually did. “He was able to get his students to think more critically.” She also describes a student who was an excellent verbal communicator but struggled with writing. “GPT became an augmenting tool for that person to get their thoughts out.”

Neer affirms, “There’s a lot of people who are just sort of frozen by a blank page. GPT can get you going.”

On the topic of transparency and ChatGPT, Joseph Mackie adds, “If you have a disability that doesn’t allow you to communicate in a way that everybody technically understands, then why not be able to leverage that tool?” He notes that requiring transparency about ChatGPT usage could make it less inclusive for those with disabilities. “That’s a whole different thought. If I’m using it, does that make me lesser?”

When the group is asked where they draw the line on their teams using these tools, Narayan says, “I tell my team not to put code out there. I’m a big believer in efficiency theory. We all have the same amount of time. You can’t buy more time. If you’re very clear on what you want to get done, and there’s an efficient way of doing it, and it’s also not punitive to somebody or something—why not? Just don’t put IP out there or things that are a business risk.”

This Bloomberg roundtable was Proudly Sponsored By

——————————

Join the Conversation: #IntelligentAutomation
Instagram: @BloombergLive
LinkedIn: Bloomberg Live
Twitter: @BloombergLive

Interested in more Bloomberg Live virtual events? Sign up here to get alerts.

——————————