
Closing the AI skills gap

  • Dan Wilson 

Interview with Mike Spaeth, Global VP of the United States Artificial Intelligence Institute


The phrase “AI skills gap” gets thrown around so often that it risks becoming background noise. One question I hear again and again is, “How do we bring everyone along, fast enough?”

That question framed our latest episode with Mike Spaeth, Global VP at the United States Artificial Intelligence Institute. Mike sits at the intersection of policy, workforce development, and enterprise strategy, so he’s uniquely positioned to talk solutions instead of soundbites.

Policy meets practice

Mike’s résumé reads like an atlas of AI milestones: Watson’s Jeopardy! victory at IBM, generative-AI leadership at EarthDaily, and advisory posts across Microsoft, Google, and government programs.

“I was lucky enough to be there when Watson won Jeopardy!; it changed my sense of what was possible almost overnight,” Mike recalled.

His policy roots date back to Capitol Hill and the Clinton–Gore tech initiatives, giving him an instinct for translating fast-moving tech into concrete governance.

Defining the modern AI skills gap

We opened by stripping away the jargon.

“It doesn’t have to be a mystery or mystical,” Mike insisted, pointing out how buzzwords intimidate newcomers and stall adoption.

I added my own litmus test: if a tool boosts productivity but staff can’t explain its limits (context windows, hallucination risk, privacy thresholds), then we haven’t closed any gap; we’ve just buried it under automation.


Certification as an equalizer – Inside USAII

USAII’s catalog spans K-12 primers, practitioner tracks, vertical-specific certificates (HR, supply-chain), and C-suite strategy suites. Mike called the model vendor-agnostic by design.

“If you only train the C-level, you’ve got maybe 15 people who understand AI. We need the whole organization, the whole country,” he argued.

That mission resonates with my own work at Cyber Intel Training and R&D at Info Science AI, where neuromorphic memory systems only matter if end-users can exploit them safely.

Case study: Malaysia’s National AI Office

Both Mike and I are boots-on-the-ground advisors to AI efforts in Malaysia; I serve as an AI Sovereignty Advisor to Malaysia’s National AI Office. We compared notes on why Malaysia is already eclipsing certain aspects of Western practice:

  • Clear national-level mandates to certify every citizen.
  • Faster trust curves: surveys show lower AI skepticism in emerging economies.
  • Policy frameworks that bake in sovereignty from day one.

Upskilling the whole org, not just the C-suite

Mike illustrated the stakes with a vivid thought experiment:

“Put two neighboring towns side-by-side. One upskills every citizen; the other doesn’t. It’ll be obvious which town thrives and which falls behind.”

We both agreed that the car analogy still rings true: you wouldn’t teach only the mechanics how brakes work while ignoring the drivers.

Some tools we actually use

  1. ChatGPT (GPT-4o, mobile) for just-in-time guidance. Mike even leaned on it after a car crash to walk him through filing claims:

    “It asked, ‘Would you like me to do that for you?’ Given my concussion, that was exactly what I needed.”
  2. The Deep Research function on various AI platforms, plus Semantic Scholar by AI2, for up-to-date research. (Dan mistakenly referenced Scholar AI, which is itself an incredible tool worth adopting!)
  3. Local sovereign stacks at Kwaai AI to prototype agent frameworks without data leakage.
  4. Open-weights models (Olmo 2, Llama derivatives) for secure on-prem projects.

Key takeaway: every practitioner should master at least one cloud LLM and one local/edge alternative.
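To make that takeaway concrete, here is a minimal sketch of sending the same prompt to two backends. The endpoint URLs, model names, and payload shapes are assumptions for illustration (an OpenAI-style chat API and an Ollama-style local server); check your own provider's documentation before relying on them.

```python
import json
import urllib.request

# Hypothetical endpoints and model names -- substitute your own.
CLOUD_URL = "https://api.example-cloud-llm.com/v1/chat/completions"
LOCAL_URL = "http://localhost:11434/api/generate"  # Ollama-style local server

def cloud_payload(prompt: str, model: str = "cloud-model") -> dict:
    """OpenAI-style chat payload (shape assumed; verify against provider docs)."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def local_payload(prompt: str, model: str = "llama3") -> dict:
    """Ollama-style generate payload (shape assumed)."""
    return {"model": model, "prompt": prompt, "stream": False}

def post(url: str, payload: dict) -> bytes:
    """POST a JSON payload and return the raw response body."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    prompt = "Summarize our AI upskilling plan in three bullets."
    # Same prompt, two backends -- run whichever you have access to.
    print(cloud_payload(prompt))
    print(local_payload(prompt))
```

The point is fluency, not the specific vendors: once the payload builders are separated from the transport, switching between cloud and local backends is a one-line change.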

Cybersecurity, sovereignty & trust

Our security segment got candid. I described a demo where a fine-tuned small model “coughed up training emails”, a reminder that privacy disclaimers ≠ airtight protection.

We agreed on four principles for sovereign AI stacks:

  1. Data-at-rest encryption with post-quantum algorithms ready for swap-in.
  2. Endpoint security as the weakest link, especially in home-lab scenarios.
  3. Transparent data-retention policies, no buried opt-in boxes.
  4. Federated or peer-to-peer compute fabrics (as in KwaaiNet) for community labs.
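Principle 1 above hinges on crypto-agility: recording which algorithm sealed each piece of data so a post-quantum scheme can be swapped in later without an archaeology project. Here is a minimal sketch of that registry pattern. The cipher bodies are XOR placeholders, NOT real cryptography; in production they would wrap real primitives (e.g. AES-256-GCM today, an ML-KEM hybrid later).

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Registry of sealing functions keyed by algorithm name.
CIPHERS: Dict[str, Callable[[bytes, bytes], bytes]] = {}

def register(name: str):
    """Decorator that adds a sealing function to the registry."""
    def wrap(fn: Callable[[bytes, bytes], bytes]):
        CIPHERS[name] = fn
        return fn
    return wrap

@dataclass
class Sealed:
    algorithm: str    # recorded so old data can be re-encrypted after a swap
    ciphertext: bytes

@register("aes-256-gcm")   # stand-in for today's symmetric cipher
def _aes_stub(key: bytes, data: bytes) -> bytes:
    # Placeholder XOR, not real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@register("mlkem-hybrid")  # stand-in for a future post-quantum hybrid scheme
def _pq_stub(key: bytes, data: bytes) -> bytes:
    # Placeholder XOR with reversed key, not real encryption.
    return bytes(b ^ key[::-1][i % len(key)] for i, b in enumerate(data))

def seal(algorithm: str, key: bytes, data: bytes) -> Sealed:
    """Seal data under the named algorithm, tagging the result."""
    return Sealed(algorithm, CIPHERS[algorithm](key, data))

# Swapping algorithms becomes a config change, not a rewrite:
record = seal("aes-256-gcm", b"demo-key", b"customer data")
```

Because every `Sealed` record names its algorithm, a migration job can find everything sealed under the old scheme and re-encrypt it under the new one.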

Quantum-era headwinds (and hype)

Mike toured IBM’s quantum lab; I countered with hard numbers: billions in cap-ex, helium-chilled qubits, and error-correction headaches. Until we hit room-temperature qubits, quantum remains a boutique attack surface, but leaders must budget for post-quantum key exchange now, not later.


Change-management lessons for leaders

I summarized the mindset shift this way:

“Old thinking says ‘replace people one-to-one.’ New thinking says ‘augment everyone and triple output with the same headcount.’”

Mike’s corollary:

“If you took AI away tomorrow, your team should feel handicapped; that’s when you know upskilling has stuck.”

Practical playbook:

Begin by assessing your current workflows, identifying the high-leverage tasks where AI could help, and calculating what share of all tasks is truly suitable for LLM support. 

Next, skill up the workforce: roll out role-specific USAII training tracks and monitor progress through certification pass rates and the subsequent uptick in project velocity. For critical cyber-threat training, mitigation, and advisory, consider Cyber Intel Training.

Move on to the sandbox phase, where you pit sovereign (on-prem) models against cloud agents to gauge latency, cost, and any privacy incidents, choosing what best fits your risk profile.

Finally, scale what works by baking AI usage directly into OKRs and performance reviews, then track tangible business outcomes such as hours saved and revenue per employee.
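The playbook's metrics reduce to a few ratios worth tracking in one place. Here is a hedged sketch of those calculations; the task list and numbers are illustrative placeholders, not figures from the episode.

```python
def llm_suitable_share(tasks):
    """Share of audited tasks flagged as suitable for LLM support."""
    suitable = sum(1 for t in tasks if t["llm_suitable"])
    return suitable / len(tasks)

def hours_saved(baseline_hours, assisted_hours):
    """Weekly hours saved once AI assistance is in place."""
    return baseline_hours - assisted_hours

def revenue_per_employee(revenue, headcount):
    """Revenue per employee, a simple scale-phase outcome metric."""
    return revenue / headcount

# Illustrative numbers only -- replace with figures from your own audit.
tasks = [
    {"name": "draft status report", "llm_suitable": True},
    {"name": "approve invoices",    "llm_suitable": False},
    {"name": "triage support mail", "llm_suitable": True},
    {"name": "negotiate contract",  "llm_suitable": False},
]
share = llm_suitable_share(tasks)       # fraction of tasks in scope
saved = hours_saved(40.0, 28.0)         # weekly hours reclaimed
rpe   = revenue_per_employee(12_000_000, 80)
```

Wiring these three numbers into OKRs turns "AI adoption" from a slogan into a trend line leaders can actually review each quarter.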


Closing thoughts

Bridging the AI skills gap isn’t a philanthropic side-project; it’s industrial hygiene. Every knowledge worker, coder, nurse, line-manager, city planner, will soon rely on AI the same way we rely on spreadsheets or search.

Mike’s reminder that “It doesn’t have to be mystical” echoes in my head each time I coach a team on their first prompt. The real magic isn’t in the model’s math; it’s in people discovering they can drive the car, maintain it, and, eventually, design the next model themselves.

I hope these pages give you both a strategic north star and a tactical map. Reach out if your organization wants to take the next step. The gap is closing fast; let’s make sure no one falls through it.

Join us as we continue to explore the cutting edge of AI and data science with leading experts in the field. Subscribe to the AI Think Tank Podcast on YouTube. Would you like to join the show as a live attendee and interact with guests? Contact Us
