TechScape: AI Spending Spree and Creative Concerns
Hi there! Johana Bhuiyan and Dara Kerr here, stepping in for Blake Montgomery, who’s hopefully enjoying some beach time (and hopefully avoiding sunburn).
### Tech Giants Pour Billions Into AI Infrastructure
Tech companies are locked in a race to build the world’s most advanced AI—not just for innovation, but to boost profits and keep investors happy. But developing cutting-edge AI isn’t cheap. It requires massive investments in data centers and infrastructure to power the supercomputers behind these systems. And that comes with a cost: a heavy toll on natural resources and local power grids near these facilities.
Last week’s earnings reports made it clear that tech firms aren’t slowing down. Google announced plans to spend $85 billion in 2025 alone on AI and cloud infrastructure—$10 billion more than initially projected—with spending expected to rise again in 2026. For context, Google brought in $94 billion in revenue just last quarter. CEO Sundar Pichai noted that the company faces a “tight supply environment” for AI infrastructure, meaning results from this spending won’t be immediate.
Google isn’t alone. Amazon plans to invest $100 billion in 2025, mostly to supercharge its cloud-based AI. That’s a big jump from the $80 billion spent in 2024. During a February earnings call, CEO Andy Jassy pushed back on the idea that cheaper tech leads to lower overall spending: “We’ve never seen that to be the case.”
Meta has also ramped up its AI infrastructure budget. In June, Mark Zuckerberg said the company would invest “hundreds of billions” in a network of massive U.S. data centers, including one set to launch in 2026. Originally, Meta projected $65 billion in 2025 spending but later adjusted that to $64–72 billion.
Both Meta and Amazon report earnings this week.
### AI vs. Artists: Can Creativity Be Protected?
AI companies are facing backlash for their impact on creative industries. Artists have seen their work used without permission to train AI models, while creative teams shrink as AI takes over parts of their jobs.
OpenAI CEO Sam Altman has even said: “It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by AI.” No problem.
In response, artists—including high-profile names like Sarah Silverman and Ta-Nehisi Coates—have filed copyright lawsuits against OpenAI, Meta, Microsoft, Google, and Anthropic. The companies argue that training on copyrighted material falls under “fair use,” while artists counter that the companies shouldn’t profit from their work without consent. So far, the legal battles favor the AI firms.
Adobe, known for tools like Photoshop, is trying to balance AI innovation with artist protections. It has introduced two “creator-safe” tools:
1. Firefly AI, trained only on licensed or public-domain content.
2. Adobe Content Authenticity, a web app that lets photographers and visual artists flag when they don’t want their work used for AI training and attach credentials identifying them as the creator.
The question remains: can AI and creativity coexist fairly? Artists can sign their digital creations just as a photographer signs a photo or a sculptor carves their initials into a sculpture, explained Andy Parsons, Adobe’s senior director overseeing content authenticity. We talked with Parsons about the growing world of AI and its impact on creators.
Q: What are the biggest challenges creators face with AI and generative AI?
The main concern is that AI might compete with human creators—whether individual artists, agencies, or publishers.
Q: Is Adobe Firefly part of Adobe’s effort to protect creators’ work from being copied?
Absolutely. From the start, Firefly followed two key principles. First, it’s trained only on content Adobe has the rights to use—not material scraped from the open web. That means it can’t generate certain things, like celebrity photos, since those likenesses are protected.
Second, we built transparency into Firefly so users know when something is AI-generated. We call this “content authenticity”—clarifying whether something is a photograph, an artist’s work, or AI-made.
Q: What data is Adobe Firefly trained on?
A mix of Adobe Stock and licensed datasets—all materials Adobe has clear rights to use.
Q: How does Adobe prevent copyrighted material from slipping into its datasets?
We only use data we’ve licensed or have rights to. A dedicated team ensures everything is cleared for use. We don’t scrape the open web to avoid intellectual property risks. More training data isn’t always better.
Q: What’s the future of human creativity in the age of generative AI?
We compare content authenticity to a “nutrition label.” Just as you have a right to know what’s in your food, you should know what’s in digital content—whether it’s human-made or AI-generated.
—
### UK Enforces New Online Safety Rules After Long Wait
Social media platforms in the UK must now implement child safety measures or face hefty fines. This marks a major shift under the Online Safety Act, affecting platforms like Facebook, Instagram, TikTok, YouTube, and Google.
Read the Guardian’s guide to the new rules.
—
### Also in TechScape
– 18 months, 12,000 questions, and a lot of anxiety: What I learned from students’ ChatGPT logs.
– The real winners of Trump’s ‘AI action plan’? Tech companies.
– Competitions prove humans still outperform AI in creativity. Better than AI at coding – just.
– AI summaries lead to ‘devastating’ audience loss for online news, publishers say.
– ‘It’s queer, Black joy’: The TikTok creator grilling pop stars and politicians on LGBTQ+ culture.
– Elon Musk opened a diner in Hollywood. What could go wrong? I went to check it out.