Building AI Responsibly
Participate in AI LA's new 8-week course
Hey there!
As AI products become a ubiquitous part of our world, it's important to ensure that the next generation of AI products and startups is built responsibly from the ground up. Responsible AI is a holistic vision for building AI ethically, from handling data and system challenges to designing how AI interfaces with users and society. It ensures that the systems powering the fabric of our new reality support human flourishing, and it gives tech companies a competitive advantage: brand differentiation and customer loyalty that lead to a stronger bottom line in the long run.
With this in mind, AI LA is developing a Responsible AI Course to help the next generation of AI startups and products navigate the technical and business challenges of designing responsibly. The course is aimed at early-stage startup founders, as well as executives and product leaders at more established mid-stage companies, who want to operationalize Responsible AI principles within their organizations.
The course is envisioned as an 8-week program with a mix of sessions with industry leaders and hands-on workshops powered by the AI Responsibility Lab (AIRL). Course participants will also have access to a network of mentors and partners through AI LA's ecosystem.
This course may be for you if you are:
An Early-Stage Entrepreneur (Pre-Seed)
A Series A+ Startup Executive
A Technical Product Leader at a Startup (PM, Product Designer)
A Manager at a Tech Company that is beginning to develop AI-based products
Please participate in this 5-minute survey to let us know what you might be looking for in a course.
May 31st: Special Interest Group: Explainability
How do machines think?
This question is becoming less academic and more practical by the day as AI systems play a larger role in our lives. Critical to living harmoniously with AI is making sure we can trust it, and critical to trusting AI is our ability to explain and understand how it behaves. How do we explain how AI systems make decisions? And how should you feel if we can't? When it comes to thinking about smart machines, where should we start?
Join us as AI LA teams up with The AI Responsibility Lab for a crash course on AI Explainability.
We'll cover why AI explainability is a challenge, how we can overcome that challenge, and why it's one of the most important questions in AI, business, and society right now.