Going into the first-ever HumanX conference in Las Vegas, I was a bit intimidated. AI can feel overwhelming, surrounded as it is by hype, inflated promises, and exaggerated expectations. Before the trip, I read AI Snake Oil by Arvind Narayanan and Sayash Kapoor to better spot when AI promises more than it can deliver.
My biggest concern is that, as technologists, we might become too reliant on technology we don’t fully understand and lose sight of our core skills. Does AI really save us time, or is it creating hidden complexity and toil down the road? Can we trust AI enough to deploy it without extensive human oversight?
The conference didn’t eliminate these fears, but it clarified where pragmatism and humans in the loop fit. Here’s what I took away, and how it connects to how 8th Light is approaching AI solutions.
AI Is a Tool, Not a Replacement
Heading into HumanX, I was thinking about AI replacing parts of the software developer’s role. After attending several sessions and talking with other attendees, I saw that AI serves more as a complement to our work than a substitute for our skills. Ross Harper, CEO at Limbic AI, emphasized this clearly in the context of mental healthcare.
“We can’t just throw an LLM in with a patient and tell it to act as a therapist. We need regulated, evidence-backed AI agents that deliver all aspects of care.”
This resonated deeply with me. Just like healthcare, software involves complex, nuanced problems that require human judgment. AI is great for automating repetitive tasks, running data-heavy analyses, or rapidly prototyping ideas. But deciding what code to write, which design patterns to follow, and how to maintain quality over time still requires a human in the loop.
AI tools like GitHub Copilot or Cursor can rapidly prototype new features, but it’s the technologist’s expertise that ensures the code is maintainable, testable, and aligned with a client’s long-term goals.
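To make that concrete, here’s a minimal sketch of what the human in the loop can look like day to day. The `slugify` function below stands in for AI-drafted code (a hypothetical example, not output from any particular tool); the tests are the human contribution, encoding the edge cases we actually care about before the code is trusted:

```python
import re

# A function as an AI assistant might draft it (hypothetical example).
# It looks plausible, but review and tests are what make it trustworthy.
def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")

# Human-written tests (run with pytest) pin down behavior the draft
# may not have considered, like punctuation-only input.
def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"

def test_trailing_punctuation():
    assert slugify("--Already Slugged--") == "already-slugged"

def test_punctuation_only():
    assert slugify("!!!") == ""
```

The point isn’t the function; it’s that the acceptance criteria come from a person who understands the client’s context.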
Open versus Closed AI: Finding Balance
A question I had before the conference centered on trust: trust from clients, and trust from software teams in the tools they use. Discussions around open versus closed AI models directly addressed this:
- Open-source models allow transparency, rapid innovation, and community improvement, but they can be susceptible to misuse or introduce unintended biases.
- Closed-source models offer security, compliance, and control but often lack transparency around how decisions are made.
Arsalan Tavakoli-Shiraji, Co-founder at Databricks, captured a thoughtful balance. “We are always exploring the areas of greatest need. From there, we look to build the best possible AI frontier models to alleviate them.”
His perspective reassured me. It reminded me that technologists don’t haphazardly chase trends; rather, we select the tools that best address specific client needs and support trustworthy solutions. Deciding between open-source models (e.g., Meta’s Llama) and closed-source models (like OpenAI’s GPT) should always depend on client priorities around transparency, security, and accountability.
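As a deliberately oversimplified sketch of that decision framing (the criteria and recommendations below are illustrative, not an actual 8th Light rubric), the tradeoff might be expressed like this:

```python
# Illustrative only: a toy helper capturing the open-vs-closed tradeoff.
# Real engagements weigh far more factors (cost, latency, data residency).
def recommend_model_family(needs_auditability: bool,
                           needs_managed_compliance: bool) -> str:
    """Map two client priorities onto a model-family recommendation."""
    if needs_auditability and not needs_managed_compliance:
        # Open weights let the team inspect, self-host, and fine-tune.
        return "open-source, e.g., Meta's Llama (self-hosted)"
    if needs_managed_compliance and not needs_auditability:
        # A vendor API shifts much of the security burden to the provider.
        return "closed-source, e.g., OpenAI's GPT (managed API)"
    # Mixed or unclear priorities call for a conversation, not a default.
    return "hybrid: evaluate both against the specific workload"

print(recommend_model_family(needs_auditability=True,
                             needs_managed_compliance=False))
```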
Trust in AI Is Our Responsibility
Another significant worry was whether deploying AI without thorough oversight could inadvertently cause harm. Adrian Blair, CEO at Trustpilot, captured it perfectly: “Trust in the age of AI will come from technology understanding and amplifying human experiences.”
This aligns closely with 8th Light’s core values of honesty and transparency. Ensuring trust means rigorous validation of AI outputs, transparent explanations of AI-driven decisions, and a commitment to ethical responsibility in each solution we craft.
Speaking at HumanX, former Vice President Kamala Harris emphasized the importance of transparency and collaboration between technology sectors and government in rebuilding public trust. She stressed the need to proactively consider broader societal impacts, especially on vulnerable communities, when developing and deploying AI.
When deploying AI-driven recommendation or analytics solutions, we must provide clear, evidence-based justifications, without deferring responsibility to “the model.” And we must actively engage in skill-building and partnerships to ensure workforce adaptability, reflecting Harris’s emphasis on prioritizing skills and fostering educational collaboration.
Knowing Where AI Is Useful and Where It Falls Short
After reading AI Snake Oil, I was already skeptical of some of the more exaggerated AI claims. HumanX reinforced this skepticism, highlighting the specific strengths and limitations of generative AI compared to traditional machine learning:
- Generative AI (e.g., ChatGPT, Claude, Gemini) excels at rapid prototyping, content creation, natural language tasks, and creative outputs.
- Traditional machine learning excels at structured data analytics, predictive modeling, decision support systems, and identifying clear, measurable patterns (see the sketch after this list).
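Here’s that sketch: a small, hypothetical example (assuming scikit-learn is installed, with synthetic data standing in for something like customer churn records) of the structured, measurable kind of problem where traditional machine learning remains the right tool:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 500 synthetic rows with 8 numeric features and a binary label,
# standing in for structured business data.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A classic classifier with a reproducible, measurable score:
# exactly the clear pattern-finding the list above describes.
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A generative model could narrate this data, but for scoring it, a purpose-built classifier is likely cheaper, faster, and easier to validate.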
However, both approaches struggle with deep ethical considerations, nuanced human interactions, complex contextual reasoning, and strategic decisions requiring human judgment. Kevin Weil from OpenAI summarized the rapid progress vividly: “Every two months there’s some new AI model that can do something computers have never done before.”
This rapid evolution underscores the importance of cautious validation, critical thinking, and continuous education. For example, generative AI tools can rapidly draft proposals or documentation, but humans must carefully review and refine these outputs to ensure accuracy, clarity, and alignment with specific contexts.
AI Demands Our Curiosity and Continuous Learning
The connections I made at HumanX showed the value of staying curious and committing to continuous education. Lauren Kolodny of Acrew Capital called AI “the biggest technological transformation of our lifetime,” adding that “there’s real demand from investors.”
Rather than fearing this change, we proactively engage with it by continuously experimenting and educating ourselves. We don’t need to chase every new AI model. Instead, we methodically explore new tools, evaluate their impacts on our workflows, and consistently develop our skills so we lead this change instead of reacting to it.
Practically, we regularly experiment with AI-driven tools such as code assistants and automated analytics to discover where they deliver genuine benefit and where human judgment remains essential.
Final Reflection
I went into HumanX unsure and somewhat intimidated by the rapid evolution of AI and its implications for technologists. I left with a clear vision of how AI aligns with 8th Light’s values. Software engineering is evolving, giving us bigger and more impactful challenges to solve.
AI isn’t something we chase. Instead, we use it thoughtfully to amplify our strengths. Our job remains building solutions we’re proud of, solutions clients can trust, and solutions that deliver tangible outcomes.
Let’s build software that lasts. Discover how AI solutions can complement, not replace, human expertise.