Dr. John Timmer, Jeff Ball, Joanna Wong and Lee Hutchinson discuss infrastructure and the environment.
Kimberly White/Getty Images
Last week, Ars Technica Editor-in-Chief Ken Fisher and I braved the skies to kick off an event we hosted in partnership with IBM called “Beyond the Talk: The Future of Infrastructure with GenAI and What’s Next.” We headed west to San Jose, and it was great to stand on stage and speak to a room full of interested Ars readers. Thank you to everyone who came! (If you couldn’t make it, don’t worry; we’ll be hosting another event in Washington, DC, next month, which you’ll learn more about at the end of this article.)
The San Jose event was held at the Computer History Museum, a completely on-brand and appropriate venue. Ars would like to thank the folks at CHM for being so accommodating to our geek gathering.
“Today’s lineup of speakers and topics reflects the complexity and rapid evolution of the technology environment in which we all operate,” Fisher said in his remarks opening the program. “We will discuss not only the potential of generative AI, but also the challenges it poses in terms of infrastructure demands, security vulnerabilities, and environmental impact.”
As Ken pointed out, our first panel focused on the environmental impact of ever-expanding data centers (and the AI services that often accompany them). We spoke with Jeff Ball, a resident researcher at Stanford University’s Steyer Taylor Center for Energy Policy and Finance; Joanna Wong, an AI and storage solutions architect at IBM; and Dr. John Timmer, Ars’ own senior science editor.
One of the main points of the panel discussion, and one that didn’t fully click for me until Jeff Ball explained it, was that not all electricity is created equal. If you look at cloud resources as a way of shifting environmental costs onto a third party, then the actual physical location of those cloud resources can have a significant impact on carbon emissions. A data center in Iceland and a data center in China may cost about the same to use, but the one in China is far more likely to run on coal-fired power, while the one in Iceland is far more likely to run on geothermal.
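To put rough numbers behind that argument, here’s a minimal Python sketch comparing annual emissions for an identical workload on two grids. The carbon-intensity values and the 500 kW draw are illustrative assumptions made up for the example, not figures cited on the panel.

```python
# Back-of-the-envelope sketch of Ball's point: the same workload, billed at
# roughly the same price, can have wildly different footprints depending on
# the grid behind the data center. All figures below are assumptions.

GRID_CARBON_INTENSITY_KG_PER_KWH = {
    "iceland (mostly geothermal/hydro)": 0.03,  # assumed value
    "coal-heavy grid": 0.90,                    # assumed value
}

def annual_emissions_tonnes(avg_power_kw: float, region: str) -> float:
    """Estimate yearly CO2 (tonnes) for a workload drawing avg_power_kw around the clock."""
    kwh_per_year = avg_power_kw * 24 * 365
    return kwh_per_year * GRID_CARBON_INTENSITY_KG_PER_KWH[region] / 1000

for region in GRID_CARBON_INTENSITY_KG_PER_KWH:
    print(f"{region}: ~{annual_emissions_tonnes(500, region):,.0f} tonnes CO2/year")
```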
IBM’s Joanna Wong also pointed out that infrastructure is often plagued by unknown points of failure: problems that aren’t serious enough to take anything down but that quietly consume extra compute (and therefore energy). Wong said these failure points need constant attention. It’s fine to worry about the energy costs of new technology, but keep in mind that you’re probably already wasting resources and dragging down performance because of failure points, or even simple bottlenecks, that you don’t understand.
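One way to see Wong’s argument is to tally the compute you’re already burning on quiet retries and half-failed jobs. The sketch below is hypothetical; the job names, fields, and figures are invented for illustration.

```python
# Toy accounting of "silent" failure points: work that doesn't take the system
# down but gets retried or repeated, consuming extra compute and energy.
# All job names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class JobRecord:
    name: str
    attempts: int                   # total attempts, including retries
    cpu_seconds_per_attempt: float

def wasted_cpu_hours(jobs: list[JobRecord]) -> float:
    """CPU time spent on attempts beyond the first, in hours."""
    return sum((j.attempts - 1) * j.cpu_seconds_per_attempt for j in jobs) / 3600

jobs = [
    JobRecord("nightly-etl", attempts=3, cpu_seconds_per_attempt=1800),  # quietly retried twice
    JobRecord("model-eval", attempts=1, cpu_seconds_per_attempt=7200),
    JobRecord("index-rebuild", attempts=2, cpu_seconds_per_attempt=3600),
]

print(f"CPU-hours burned on retries alone: {wasted_cpu_hours(jobs):.1f}")
```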
Joanna Wong (center) answers questions.
Kimberly White/Getty Images
Next, we moved into the ever-evolving world of security vulnerabilities and AI-generated (or at least AI-audited) code. Joining me for this panel were Stephen Goldschmidt, Global Platform Security Architect at Box; Patrick Gould, director of the Cyber Telecoms Portfolio at the Department of Defense’s Defense Innovation Unit; and Ram Parasuraman, executive director of data and resiliency at IBM.
We’ve tangled with this topic before, most recently at our Ars Frontiers virtual conference in 2023, where security experts expressed misgivings about the idea of AI-generated code, given how readily most LLMs fabricate things. According to our panelists, though, the most appropriate role for generative AI in coding is likely to be augmenting human coders rather than replacing them: AI can help identify vulnerabilities in code, spot bug-inducing typos, and push the metaphorical broom along behind human coders to sweep up their errors. We’re still a long way from being able to trust completely AI-generated code in production (unless we’re crazy or careless), but AI-vetted code? That future is already here. As Parasuraman put it: “The question of how to trust the output of AI will never go away. What will change is how we validate and monitor that output.”
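As a sketch of what “augment, don’t replace” might look like in practice, here’s a hypothetical CI-style gate in which a model reviews a diff and a human still owns the merge. The `request_ai_review` helper is a made-up stand-in for whatever LLM-backed service you would actually call; none of this reflects a specific panelist’s toolchain.

```python
# Hypothetical sketch: AI-vetted (not AI-written) code in a review pipeline.
# The model surfaces findings; a human makes the merge decision.

from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # "low", "medium", or "high"
    message: str

def request_ai_review(diff_text: str) -> list[Finding]:
    """Stand-in for a call to an LLM-backed review service (hypothetical)."""
    # Returns a canned result so the sketch runs; a real version would send
    # the diff to whatever model or service your team has decided to trust.
    return [Finding("low", "possible typo: 'recieve' in variable name")]

def gate_merge(diff_text: str) -> bool:
    """Print every finding for human review; block the merge only on high severity."""
    findings = request_ai_review(diff_text)
    for f in findings:
        print(f"[{f.severity}] {f.message}")
    return not any(f.severity == "high" for f in findings)

print("OK to merge (pending human review):", gate_merge("--- a/app.py\n+++ b/app.py\n..."))
```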
Left to right: Stephen Goldschmidt of Box, Patrick Gould of DIU/DoD, and Ram Parasuraman of IBM.
Kimberly White/Getty Images
Finally, our closing panel was about “Taking the Long Game in Infrastructure”: planning your infrastructure in anticipation of unexpected problems. With me were Ashwin Ballal, Chief Information Officer at Freshworks; Karun Channa, Director of Product AI at Roblox; and Pete Bray, Global Product Executive at IBM. “How do you anticipate unexpected problems?” is a difficult question to answer, but our panelists, representing everything from cloud-native setups to hybrid environments with plenty of on-premises data centers, took on the challenge.
Perhaps unsurprisingly, the answer turned out to be a combination of smart requirements gathering, resilience, and flexibility. Getting a firm grasp of your requirements is the unavoidable first step; once those requirements are properly planned, you can start building resilient infrastructure. And if your infrastructure is resilient (and, most importantly, if you have some emergency operating funds on hand), it can also be flexible enough to accommodate unexpected spikes in demand, or at least you can temporarily throw money at the load until the problem is resolved. It’s not rocket science: good requirements planning always wins, even for companies that actually are doing rocket science.
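For a back-of-the-envelope feel for that requirements-then-resilience-then-flexibility chain, here’s a toy Python check of whether planned capacity plus an emergency burst budget covers an unexpected spike. Every number in it is made up for the example.

```python
# Toy capacity check: can provisioned headroom plus an emergency burst budget
# absorb a demand spike? All figures below are illustrative, not from the panel.

def can_absorb_spike(baseline_rps: float, headroom_factor: float,
                     spike_rps: float, burst_budget_rps: float) -> bool:
    """True if normal provisioning plus temporarily purchased capacity covers the spike."""
    provisioned = baseline_rps * headroom_factor
    return provisioned + burst_budget_rps >= spike_rps

# Requirements gathering says ~10k requests/sec is normal; we provision 1.5x
# headroom and keep budget in reserve to rent another 8k rps in an emergency.
print(can_absorb_spike(baseline_rps=10_000, headroom_factor=1.5,
                       spike_rps=22_000, burst_budget_rps=8_000))  # True: 15k + 8k >= 22k
```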