Three years ago, we published our initial Point of View on AI. At the time, the technology was advancing quickly, but adoption was just beginning.

Today, every agency is talking about AI. And a new wave of agencies is promoting the idea that autonomous AI agents can create content, analyze data and make decisions with little to no human involvement.

That may sound like progress. But for organizations managing real reputational, regulatory and operational risk, autonomy without oversight isn’t the opportunity. The opportunity is working with trusted partners who apply AI responsibly—delivering faster, smarter, more effective outcomes while remaining accountable for the work.

AI is everywhere, accountability isn’t

Anyone can now access AI to generate content, analyze data or automate workflows. Access to AI is not a differentiator or value driver on its own.

What matters is how AI is applied. The advantage comes from how it is governed, validated and integrated into work that is meant to deliver real value and results.

For agencies like JPL serving large corporations, higher education institutions and government agencies, that distinction matters. The risks are not theoretical. They include privacy, confidentiality, bias, brand safety, regulatory exposure, reputational impact, political neutrality and employee relations.

AI can absolutely deliver impact. But without clear accountability, it can also amplify risk.

The value still comes from people

Our experience over the past three years is clear. AI is a multiplier when it is directed by professionals with expertise who collaborate across disciplines and functions.

The value comes from AI generating options and accelerating processes, while people make decisions by applying judgment, context and experience.

AI can help teams move faster. It can expand analysis. It can support stronger execution. But it does not replace expertise. It does not understand nuance the way experienced professionals do. And it can sound confident even when it is wrong.

That is why the control point is not the tools but the people and the processes around them.

Embracing responsible use

Our AI Point of View, first developed in March 2023, establishes clear accountability for how we use AI and continues to evolve as the technology advances. The core principle remains the same: Responsible use of AI strengthens expert teams and builds trust with clients.

At JPL, AI supports our people—it does not operate independently.

What accountability looks like in practice:

  • We protect client data by using only secure, non-training AI tools
  • We validate every output before it reaches a client
  • We disclose meaningful AI use
  • We require partners to meet the same standards

Human experts are accountable for every output. Validation is built into the process. Data is protected within clear boundaries. And we do not use AI in ways that could mislead or create false confidence.

This is what turns AI from a risk into an opportunity—especially for mature, established organizations. It enables speed, efficiency, broader analysis and stronger execution across strategy, creative, media, digital and production, while maintaining the standards our clients require and their audiences expect.

Today, there is no advantage to just being able to access AI. Instead, value comes from how well it is applied. At JPL, that means a constantly evolving combination of talented team members, third-party and proprietary data, and agency-developed and customized apps, tools and chatbots—working together with clear accountability for results.

About the Author

Luke Kempski
CEO & President

Since becoming president of JPL in 2004, Luke has led JPL to relentlessly evolve into one of the Mid-Atlantic's largest independent integrated marketing agencies. Luke also serves as CEO of JPL subsidiary d’Vinci Interactive.
