Artificial Breakdown

2. AI Literacy + Ethics | Stephanie Enders

ZGM Season 1 Episode 2

Today Carrie and Pete talk to Stephanie Enders, Chief Delivery Officer at Amii (Alberta Machine Intelligence Institute). Listen in as they chat about AI literacy, why ethics matter, and what the heck is up with Alberta's AI adoption rates.

Links:
Upper Bound 2025: https://www.upperbound.ai/ 

Stephanie’s Upper Bound 2024 Talk: https://youtu.be/AtkskoAvyw0?si=DvoDf0O2Fo9kiAys 

Amii: https://www.amii.ca 

Carrie (00:50)
Alright, welcome to Artificial Breakdown. Today we have Stephanie Enders on, who I'm very excited to chat with. Steph is the Chief Delivery Officer at Amii, which is the Alberta Machine Intelligence Institute, located in Edmonton, but also all over, from what I understand. Everywhere.

Stephanie Enders (01:08)
Everywhere.

Carrie (01:10)
It's everywhere, just like AI itself. Yeah, I saw you speak at Upper Bound this past year, and I've literally been thinking about that talk ever since I saw it. So I'm very excited to have you on. Welcome.

Stephanie Enders (01:22)
Thanks for having me. It does feel like AI is everywhere, and maybe Amii is everywhere too, but we're definitely on that mission of AI for good and for all. We're headquartered here in Edmonton, Alberta, Canada, but we serve, of course, an Alberta mandate. And then more and more we're seeing ourselves really serve Canadian companies coast to coast to coast, and we have great international partnerships.

I guess we're Edmonton proud, with that Edmonton legacy of AI research, but definitely repping the Edmonton brand and that Edmonton loyalty around the world.

Peter Bishop (02:02)
I like that. I'm Edmonton born, so I still have roots there, which is nice. How long has Amii been around?

Stephanie Enders (02:09)
So the not-for-profit that you see and interact with today was founded in 2017 through an initiative called the Pan-Canadian AI Strategy. Canada was actually the first country in the world to have a national AI strategy, and part of that was anchoring these acceleration hubs that could find ways to solidify our future in foundational AI research and then, as we grew, find a path to commercialization.

Here, we were founded in 2017 based on a long history of AI research in the Edmonton region. That first group of AI researchers came together in 2002, so more than 20 years ago, to start that research legacy. They grew their practice over time at the University of Alberta, allowing Edmonton to be one of those hubs when the strategy was announced in 2017.

Peter Bishop (03:05)
2002, it's amazing that people were thinking of this that early, considering it all just kind of came to a head over the last, well, it feels like the last 24 hours, but really the last year.

Stephanie Enders (03:06)
I guess, well, it's that kind of pop culture saying: it takes 10 years to be an overnight success. And I think Canadian AI is really a testament to that. When the general public gets access to different tools, it might feel like an overnight success, but there are decades and decades of research, specifically in Canada, that are really allowing these new breakthroughs to happen.

Peter Bishop (03:21)
Mm-hmm.

Carrie (03:44)
I know Edmonton is quite the hub for AI and has been for a long time, which is nice because now I feel like people are recognizing it. So you've been involved in the space for how long? How long have you been in AI?

Stephanie Enders (03:47)
I joined Amii just over four years ago. Prior to that, I had spent a lot of time in tech entrepreneurship, through an organization called Startup Edmonton and then with Edmonton Economic Development and Innovate Edmonton. So I wasn't new to the technology space or the startup space, but I was definitely new to AI. And at that time it was really a chance to focus my personal professional practice. I started off in agency land,

where you're serving a bunch of different clients and helping them gain success. Then I moved into kind of building mode with one organization. But over time, the kinds of things we were building went broader and broader and broader, because tech is quite diverse. And so coming to Amii was a chance to make that same shift, to focus on one particular avenue or domain of technology, which I hadn't done before and which I was excited about.

Carrie (04:55)
That's interesting. What kind of drew you to this domain?

Stephanie Enders (05:00)
Well, I think potential is probably the big one. It was a chance to explore something that had this great legacy, but also had the opportunity to make an impact in a lot of different ways. So it wasn't just about shipping a single product; it was about how do we really find ways to leverage this opportunity,

Carrie (05:02)
Mm-hmm.

Stephanie Enders (05:26)
like, it sounds cheesy, but for good, for ways that have maximum impact. Because I think that piece of citizenship is something I really value in my professional practice, and technology has been a path to bring that to life beyond the traditional ways, like volunteering or public participation.

Peter Bishop (05:48)
Well, it's funny because you hear so much of the bad with AI. I think everyone's worried, you know, about jobs and robot slaves and all those things. But there are so many good stories coming out. I can't remember if I brought this up before, but a friend of mine was using AI to generate stories for his daughter. His daughter would come up with characters, like Mickey Mouse and Iron Man, and get AI to generate a bedtime story, and then he'd read it to her at night, which I think is a really cool application of AI versus the one where everyone's cheating on their exams. You know what I mean?

Stephanie Enders (06:27)
Yeah, I think that piece of discovery sometimes gets lost in the narrative about responsible or safe or ethical AI: the things that we do as people that I think are exciting, like being curious and being creative. It's good to keep in mind that AI is a tool. It's a tool made by humans. We're in control.

We're in control of how we use these applications, and we do have influence and autonomy. We're not at the point, and I don't think we'll be at that point for a long time, or ever, where there's a singular AI that has all those things and is in a position where we don't have influence and autonomy. But there are definitely AI systems that maybe have been developed without ethical practices, and those have an impact on people's autonomy and their success and their wellbeing. And I think that's the really nuanced conversation: people tend to want it to be black and white. And when it needs to be, it can be black and white. But there's also a conversation, especially around curiosity and creativity, maybe in the arts community or the creative arts community, maybe even in the marketing community, about some of these tools, specifically generative AI or agentic AI tools, being an aid to creativity and an aid to curiosity versus a curiosity killer or a creativity killer. They can live in the same landscape.

Carrie (07:59)
Yeah, and I've said this before too: it helps to think of AI less as just one thing and more like the internet. There's a lot going on. There are so many different tools within this. AI itself isn't a tool, it's an umbrella. And there are so many different things you can do with it.

Stephanie Enders (08:15)
Mm-hmm.

Carrie (08:17)
So it is hard to say this one thing is bad or this one thing is good, which is what I appreciated about the talk that you did at Upper Bound, which was in Edmonton in May 2024.

Stephanie Enders (08:30)
Mmm.

Carrie (08:31)
I really liked, like you said, these nuances; we need to talk deeper and more about all the different little things. So, what you talked about, and you were up there with Mara, is that a different person? Mara Cairo? What a cool title, you guys have the coolest titles.

Stephanie Enders (08:43)
Yep, with Mara, she's our Product Owner for Advanced Technology.

Peter Bishop (08:49)
Yeah, we seriously gotta look at our titles.

Carrie (08:52)
Yeah. So you guys talked about "Risk Management in the Age of AI: Navigating Ethical and Legal Dimensions," which is a huge topic. So I guess my first question about that stuff is: why is ethics important to you when we're talking about AI?

Stephanie Enders (09:13)
That's a good question. I think the main reason ethics is important is that we really need to be human-centered in our practice. The people who are impacted by the decisions technology makes should be part of this cycle of evaluating the impact and understanding it.

And so that's one of the reasons we're really focused on AI literacy here: it's really important that there's a shared language so folks can have really informed discussions. And we want to leverage this technology for its mass potential, because I also think it would be unethical to say, there's this technology here, but because these conversations are challenging, we're going to park it and not look at the opportunity.

Because that's also not being of service to our communities, our planet, and our people. It's really that conversation of how, when executed, with the most understanding, with the most diverse perspectives related to the specific challenge that we're trying to solve, can this tool be applied for more benefit? So that's, guess, the reasons why ethics are important. I think that talk did have very broad title, likely because we needed to figure out a title before we had figured out content, if I'm being super honest. 

Carrie 
It feels like sometimes, well, maybe not now as much, but before, it was like people were gonna try crazy things with AI, they were gonna try and break it, try to get it to tell you how to build a bomb, even though it's not supposed to be allowed to, and stuff like that. So it almost feels like there's a bit of catch-up. It's like, okay, the public has figured out how to do this, now let's make sure it can't. And then they figure out the next thing, and it's let's plug that hole, plug that hole, plug that hole. Do you think we're getting into a space now where people are trying to be a bit more preventative from the get-go?

Stephanie Enders (10:52)
So I think what you're talking about are guardrails. The goal is kind of, what is the intended purpose? And then, what are all the ways this tool might be used that we didn't anticipate? So I do think there's a lot of deep thinking that goes into these kinds of tools and models before they're released to the public, to mitigate that risk.

Carrie (11:17)
Hmm. You have other... I'm just like, plug holes!

Stephanie Enders (11:43)
I think that's where the big push in the last few years on something called AI safety has been, which is really identifying what is the risk of AI, what's the risk to AI, and then figuring out at a level and a pace that can be of service to the people using it while also keeping the technologies safe for people to use. I would say it's an emerging field. 

There are a lot of great first steps, there are international agreements and bodies, but I think on AI safety specifically, there's still a lot of science to be done. And that's something we're really looking forward to: advancing our understanding of AI safety and the work to be done there, so that we can continue to inform those practices. I think the other piece is really understanding, through curiosity, those pieces as you're building an AI tool or application.

And so what we do here is really a lot of proof of concept work, which means we're experimenting and working alongside businesses to see if AI and machine learning tools can help solve the problems their businesses are facing.

Carrie (13:05)
Mm-hmm.

Stephanie Enders (13:06)
What that also allows us to do is have a bit of a ramp-up period to think about the risks and mitigation factors that really need to be worked through before it goes into deployment. So it's that piece of really getting to know your technology before it's out in the world, and knowing you have done your due diligence to understand what those guardrails might need to be before a customer or the public or even you yourself are using it.

Carrie (13:36)
Right.

Stephanie Enders (13:36)
Mm-hmm.

Peter Bishop (13:37)
What I find, because I hear every so often that Canada is way behind on everything, like companies aren't quick to adopt, or the policies, no one even knows what questions to ask. How do you feel Canada is stacking up against other countries at the moment?

Stephanie Enders (13:46)
Yeah.

So we are slow, despite this real head start on fundamental research. I think Canadian technology adoption and Canadian productivity challenges go hand in hand. I don't think it's a surprise to anyone that productivity continues to be a challenge in the Canadian workforce and the labour market, and technology adoption is that same kind of piece.

There seem to be new numbers coming out all the time about which companies are using or adopting AI. From the last one I read, about 6% of Canadian companies are using AI, and we're a little bit farther behind in Alberta. But I think with every passing day, there's probably a discrepancy between companies that have the use of AI in a strategic plan and are moving forward with a comprehensive AI strategy, versus companies that have individual employees using generative AI tools to support their individual productivity.

So that's something I'm really interested in learning more about. Is this a closing gap or a growing divide, that dichotomy of companies perhaps lacking an AI strategy to solve business problems versus individual employees using AI tools, sanctioned or not, to improve their personal productivity? And what does that mean for different organizations?

I don't have the answer to that question, but I'm seeing that happen more and more where there might be a growing gap between the intended technology adoption strategy and the rate of adoption by individuals.

Carrie (15:48)
That's so interesting. A friend of mine recently said they're not allowed to use that. Like, they cannot access any AI, open source or closed, anything, at work. And I was like, I'm sorry, how do you do your job? How do you work? Now it's become such a tool for me as a writer and in a lot of other ways.

And to be fair, he works for a pretty big government organization. I was like, so do you still use it? And he's like, no. And I was really expecting him to say yes. And he was like, no, I will lose my job. And I was like, what? Which, I get it. I do get it. But that 6% blows my mind, that it's that low.

Stephanie Enders (16:18)
Yeah, and I think, well, there are probably real concerns that they're addressing by implementing something like a total ban. When I think of a company that's put in a total ban, it's a few things. One is there's probably a really sensitive data set, or set of data sets, with really private information. And so in an effort to keep data

Carrie (16:41)
100%.

Stephanie Enders (17:06)
secure and private, the path they've chosen is this opt-out. The second is really that piece around strategy: we don't have a framework in place yet to prioritize who should go first, which projects, which people, which risks should go first for evaluation. And then the third is really that piece of

Carrie (17:24)
Mm-hmm.

Stephanie Enders (17:36)
we haven't identified what the definition of an AI tool is. So without that nuance, how do we say yes to anything? I think that's even something here that is interesting. We have something called a principled AI framework, and it's kind of the roadmap for our teams: how do we go from our responsibilities, what we truly believe we are responsible for in the development of AI and in our practices and initiatives, and bake that into process every day? And one of those processes is procurement. Right now in the framework, there's an AI software procurement process. So if you are buying software that has AI as an identified feature, there's a quick rubric that goes alongside our IT procurement processes, so that we have some checks and balances and we know what AI tools we're using. That's been in the framework since it was launched.

What's not in the framework, and what's just come up that I have to go and update, is what do we do when existing software that has been part of the organization adds an AI layer? Who gets to turn it on? We're not going to get rid of foundational tools that everyone uses, but we definitely need to update our understanding of those tools.

It's happened really rapidly, and I'm sure it's happened for you: a tool you've used for a long time all of a sudden has a spangly little button that says, try our AI now. And that's a piece that's deep, deep in the weeds of operationalizing things like principled AI. But now I'm on the hunt across the organization, where I'm like, I saw the button turn on, you need to fill out the rubric.

Carrie (19:13)
Little stars.

Peter Bishop (19:30)
Hehehehehe

Stephanie Enders (19:32)
And I don't like to be this little AI sheriff in the organization, but it's not so much about having a hard conversation. It comes from this place: I want to know where it's come from, how it could impact our team, how it could make them more productive, but also making sure we have the checks and balances, so that we understand whether the decisions we already made about a technology tool are still in play, or whether we need to escalate it to a different conversation.

Yeah, we had like three just come up in the last month where I was like, what? This tool that has been used for a long time all of a sudden now has an AI layer. Most of them are generative AI layers, so it's easy for us to get through our rubric, which is not earth-shattering. It's like, can they train on the data? Where does the data go? Do we know what kind of model it's using? Do we have the ability to opt out? All those things.

It's really the basics, but having to go back and revisit tools and helping everyone understand when that happens has been interesting.

Carrie (20:41)
Well yeah, because a really established tool, you know, an Asana or a Monday, and I don't know if they use AI, but they could sneak something in there that is suddenly taking some data, taking a lot of information from your company, because you're putting a lot of info into that specific software. Yeah, that's actually very interesting. But you would hope that all of these companies, like you said, already have their frameworks for how to operate, and hopefully the AI is operating within that: it's ethical, it's working for your customer, and there's a reason for it. And they're not just like, ooh, here's some quick little data theft.

Stephanie Enders (21:25)
And I really don't think, like, I love the tools we have, and I don't think everything starts from malicious intent, but people bring their personal experience into the professional setting. I think it was just last week, I'm sure you saw the post too, that on LinkedIn people were like, there's a new button in LinkedIn about training models off your posts. And when it's social media and your personal data, people are used to interacting with those opt-in, opt-out choices. When it's enterprise software at your workplace, it's a lot easier to be like, well, this is already in here, I must be allowed to use it. So I would really encourage folks, as they see those new opportunities come up in the tools they already use, to have a conversation from a place of curiosity: how could we use this?

Peter Bishop (22:07)
Yeah.

Carrie (22:07)
Totally.

Stephanie Enders (22:20)
What's an experiment we can run before we decide if we're keeping it? Do we have the ability to turn it off? Who should be involved in this decision? Because we're seeing it in all kinds of software that companies use every day. And so again, going back to that question of what the AI adoption rate in Canada is: that's where I think the numbers are gonna get really fuzzy, because there's gonna be a huge difference between companies...

Carrie (22:44)
Fair.

Stephanie Enders (22:48)
... that are purposefully adopting AI and companies that are just casually having it turned on in the tools they already use.

Peter Bishop (22:58)
So interesting. It must be like trying to fight against the tide in some ways. I remember this is going to date me more than any comment I've made so far, but I remember when I was just getting going, there was an agency that refused to use the internet because they were just like, no, this is going to come and go. 

And you know, there's all these risks, and they were very worried about it, which makes sense. But then there's also this feeling of, okay, while you're doing this, is everyone else passing you by? How hard is it going to be to keep employees if you continue to shut them down? There's all these kinds of factors.

I can imagine it's tough. It's got to be really tough, especially for something like government or quasi-government, where the consequences of these decisions are so big. It must be hard to even know who the decision makers are sometimes.

Stephanie Enders (23:47)
Yeah, and I think you've really hit on an interesting point: the decision is so big. Based on what you're tasked with doing, that decision always feels big. So I think of a family business or a small business that has limited resources. These AI tools could open a huge window of opportunity in terms of reach and growth and productivity, but they might also feel really alone in not knowing if they're using them right or not understanding the technology. If they use something from gen AI, let's say marketing copy or a blog post or generative images, and they haven't taken the steps to revise, review, edit, personalize, all the things I think you still need to do when using it as a tool, they could be in a really tricky place with their customer, because they might unintentionally offend and they might unintentionally deceive. They're using the tool for speed and efficiency, but maybe haven't built in the steps to figure out how they want to use the tool and protect their company. So to me, that's as big a decision as a government leveraging public data for impact that might affect millions. The stakes always feel high.

Carrie (25:18)
Hmm.

Peter Bishop (25:25)
It makes sense. Last question here, Carrie, I know I keep butting in. I remember watching an interview with the people who had kind of invented the iPhone, and they were talking about how they were so excited about the technology. They really didn't think about the social consequences, where suddenly now it's okay to have a phone in front of your face while you're having a conversation, or to sit in bed side by side with your partner with your phones on. Hypothetically. That never happens at my house.

But it's just, they didn't really foresee, and were completely surprised by, the consequences of this new technology. It was a fascinating interview. And do you feel like with AI we've seen the repercussions already, or do you think there's way more to come? How far do you feel we are along that chain of realizing what we've done?

Stephanie Enders (26:15)
So it's that question of, we were so focused on if we could, we didn't think of if we should, right? It's an unpacking of that kind of ongoing story. And I think it's like that with any really big scientific discovery or technology advancement.

Carrie (26:17)
Realizing what we've done. What have we done?

Peter Bishop (26:19)
What have we created?

Stephanie Enders (26:45)
It's the human drive of, can we push our limits to see if we are able to understand this new thinking and this new opportunity, versus should we? And I don't know if there's ever a right answer to that, because I am the beneficiary of many people figuring out if they could do something before they figured out if they should.

I think my life is better because of an iPhone. I think I have more access to conversations and content and ideas. Do I love all of the habits I formed around that technology tool? I do think, in my own personal practice, and I'll date myself here too, I'm at that cusp where I was too old to be in the first wave of college-based social media platforms.

But I was an early adopter of them when they moved off the college campuses. So I am horrified by how long I've been on Twitter slash X and Facebook and Instagram. But I do feel that, professionally, for my own practice of thinking about this technology, I really lucked out in where I was on that last big technology shift, because

Carrie (27:51)
Hmm.

Stephanie Enders (28:14)
I had the option to opt in, where for folks who are younger than me, it was really ubiquitous. It was just there. And so I know in my own work, I think deeply about what I would have done differently if I were the one unleashing this on the world. It's totally a false assumption to think that I know what the long-term impact of AI will be.

But I do think when people bring their own experience with the technology adoption they were exposed to, and the technology shifts in their lifetime, it makes the next wave of technology shifts more informed. And especially with this group, and there's a lot of emerging talent, technology shifts happened a lot more rapidly in their lifetimes than in our parents' lifetimes and our grandparents' lifetimes. So that cycle of when new technology comes in

has really shortened. And I think that's probably a benefit to the folks building AI now, who are really at the forefront: they've experienced multiple changes in game-changing technology. I remember when we got the internet at our house, I remember social media, I remember getting an iPhone. And all of those are very, very different technology tools. They kind of got all lumped together, but they're actually incredibly different.

And so that's one of the things I do think is maybe more unique about this shift to rapid AI adoption: the folks, en masse, investing in the technology and building the technology have gone through multiple fundamental technology shifts in their very young lifetimes, where the creators of those earlier technology platforms maybe didn't go through so many in rapid succession.

Carrie (30:07)
It's just mind-blowing how quickly things are moving, but also how quickly we're adapting to how quickly things are moving. And you see kids these days, I have two nephews, and they just know how to use every screen in front of them. It's almost like they were born with this knowledge. And then you give a new technology to somebody, you know, older... But yeah, our brains just adapt and we move along with it. And that's why I find it really interesting comparing it to an iPhone, and why I like comparing it to the internet more. Eventually there's probably going to be a tool, or sorry, a piece of hardware, that has this AI, that maybe someone could become addicted to, or that could have some kind of social problems. But right now, AI is just this ethereal thing.

Wow. You have so many great insights. I'm obsessed.

Stephanie Enders (31:03)
Yeah, I think one thing that is still very much at risk and doesn't get talked about is this kind of technology divide. We've talked a lot about it in previous technology advancements. We still see it with things like internet speed in rural areas.

So there is this conversation around who has access to the technology and who doesn't, and there are lots of different ways we can talk about that in AI: who has access to the infrastructure, who has access to the data, who has access to the tooling. But one thing I think is moving much faster with AI than with previous technology adoption is the understanding of AI literacy skills as fundamental access points for all people. And that's something that has shifted even in my time here at Amii.

Early in my time here, there was a lot of talk of upskilling: you have to think about upskilling people to AI. And I think now we really think about it more as core AI literacy, because when we're developing these AI tools, the best way to do that is shared language.

Peter Bishop (32:10)
Hmm.

Stephanie Enders (32:22)
So that folks who are impacted by the tools and folks who are building the tools have a shared vocabulary, because then we can have really informed discussions. And that doesn't mean everyone goes and becomes a computer science major. It's really around how these technologies function. What is the underlying structure? How are they trained? Where does that data come from?

And so we focus a lot on literacy here at Amii. We have a great training team, and we've gone from thinking around upskilling, the piece people bring up, like a displaced labour force, we have to think about upskilling, to thinking of it as a true, fulsome training pipeline from kindergarten to workforce.

Carrie (33:11)
Wow.

Peter Bishop (33:11)
Love that. You know, it's funny, it reminds me a little bit of my mom. She's just about in her 90s, and I really do notice it, because she hasn't kept up with technology. Even simple things like texting are not really of her era. There are so many things she's missing out on, to the point where it's really hard for her to function in society. You go to, like, Safeway, and you're expected to be able to navigate these interfaces to pay now.

Stephanie Enders (33:42)
Yeah.

Peter Bishop (33:42)
And they change all the time. You go to the gas station and it's a new interface, you go on your TV and it's a new interface, Netflix has got one, Disney's got one, and then your phone's got one. You're expected to have a smartphone to do most things these days. So if you haven't kept up, you really are getting, I feel, squeezed out of society in so many different ways. And it's interesting how you're talking about that literacy. Because again, it reminds me a little bit of...

Stephanie Enders (34:01)
Yeah.

Peter Bishop (34:11)
... any new technology when it comes out. It's kind of Wild West, everyone's doing everything, they're calling everything their own thing, and it's really hard to learn. But as things streamline and some winners emerge, they usually set the tone and start to standardize, and it becomes easier for the average person to hop on. It kind of reminds me of 3D printers, which I don't think ever really streamlined. It still looks like everything was made in someone's garage.

Stephanie Enders (34:31)
Mm-hmm.

Peter Bishop (34:39)
You know, it's just chaos, right? So do you feel like it's starting to standardize a little bit? Are there ways to learn it and not have to relearn it for every different variation?

Stephanie Enders (34:54)
Yes, I think there are absolutely ways to learn about the technology without being specific to a platform or model. But I do think the field is moving really quickly. So that piece of being able to identify, interact, and evaluate, those are some of the core skills, because it's less about whether you're gonna be a Microsoft Copilot practitioner or a Google Gemini practitioner.

I think the fundamental literacy core skills are really on that piece of being able to identify that you're interacting with AI or have the opportunity to use a tool, to understand when that tool is having an impact or influence over a decision that affects you or that you're making, and then ways to advocate for yourself through that process. There's so much under literacy we could cover, but I think it's about finding that balance with really specific skill sets. And I guess prompt engineering would be an example:

there are really specific interfaces that people will get used to with prompt engineering. I think it's probably going to move away from engineering in some ways; it might be more of a communication style. There are very technical parts of prompt engineering, but I think the fluidity of being able to interact with AI well is a skill. And that's the skill that's transferable, rather than being like, okay, I'm working under a framework from this open source model, or, okay, I know that now I'm in this environment and so I'm gonna think about these things. I think it's more those skills of figuring out how to make the most of the tool for the purpose you have at hand.

Carrie (36:56)
Yeah, I like that too. It's that same idea of, yeah, learn how to use an iPad, but don't just learn how to use an iPad: learn how to use these technologies, learn how to use a different type of tablet, learn how to look at something like this, try it out, and figure it out. Which is what I think the younger generation has. They just try things. They just start pushing buttons. Whereas if you put that in front of somebody older who isn't as in touch with technology, they're too scared to touch it. And it's like, it's fine. You just gotta hit some buttons and see what happens.

Stephanie Enders (37:30)
I think probably the funniest example of this is we have live chat on the Amii website. It's not a bot, it's the team. And so for a really long time, our faces show up; you can see who is answering the live chat on the website. And for a long time, we just kept it as faces and we would introduce ourselves. So if someone's on the live chat, I don't answer it as often anymore, which is probably a good thing.

But I'll be like, hey, this is Steph, I'm the Chief Delivery Officer at Amii, how can I help you today? Now I've noticed the team has switched the header of the live chat to be really direct: we're real Amii employees, you're talking to real people. Because, well, there's that other assumption. They're like, I'm at Amii, this must be AI, I'm going to interact with the AI. So we're just really clear: you're going to talk to the team, and we're real people.

Carrie (38:26)
Yeah.

Stephanie Enders (38:28)
We could see that uptick slowly after the release of ChatGPT, where we would just get math problems submitted. And we wouldn't answer math problems, obviously, in real time. But it was a signal to us that there had been a shift in how people were perceiving things like live chat. They went from the assumption of, this is a place I can go for help, to...

Carrie (38:36)
You're kidding.

Stephanie Enders (38:58)
... this is a chatbot that I'm interacting with, and I can kind of play with the technology. So now we're really direct: we're here to help, it's the Amii team. And people are often shocked.

Carrie (39:07)
Straight up human, I swear. But that leads into another thing I like talking about, which is that AI isn't a solution to every problem. You have to have your problem and then decide if AI is the solution for it. And if it's not, it's not. And to me, chatbots are just not it yet.

Okay, well, we only have a couple of minutes left here, but I really just wanted to thank you for coming on. I just appreciate your curiosity and your optimism, and I think you have a really valuable outlook on where AI is going and where it's come from. I was telling Peter I was trying to figure out how we were going to, you know, do AI at ZGM. And I saw your talk, and then I saw Mara on the escalator after the talk, and I was like, so how do you make this happen?

Like, I'm trying to create this task force, I'm trying to decide how we're going to get AI moving and make sure everybody's on board at ZGM. And she was like, well, you just need to find a Stephanie in your organization. And I was like, well, I think that might be me. And obviously she was like, yeah, it might be you. I was like, shit.

Peter Bishop (40:17)
So easy.

Ha ha ha ha ha ha ha

Carrie (40:22)
 Yeah, it's just been lovely chatting with you.

Peter Bishop (40:27)
Yeah. Yes. Yeah.

Stephanie Enders (40:28)
Well, thanks for having me.

Carrie (40:29)
Yeah, that went so smoothly. It was awesome.

Stephanie Enders (40:35)
So, Upper Bound 2025 is coming May 20th to 23rd. It's really this exploration of the intersection of academia, industry application, and emerging talent. We're expecting thousands of practitioners to come from all over the world. We have 16 themes, and one of the ones I'm really excited about for this upcoming year is an expanded AI safety and ethics theme: those pieces around guardrails, really leaning into both sides of that conversation.

We also have AI for critical infrastructure, which is about that intersection of computing infrastructure, cybersecurity, and the critical infrastructure we all rely on here in Canada, things like water systems, food systems, travel and transportation. And then an expanded AI education and literacy theme.

We run a large pilot for K to 12 focused on teacher professional development, and so we're expecting a lot of teachers to come and join us. We're exploring the implications and the opportunities for AI in the classroom, while also building up teachers' understanding of the technology. So I hope you'll join us: upperbound.ai, and tickets are on sale now.

Peter Bishop (41:46)
Yeah, I'm looking forward to it. Thanks again, Stephanie. Have a great rest of the day and really thanks for being on the podcast. This is awesome.

Carrie (41:53)
Woo! Bye!

Peter Bishop (41:53)
Bye.

Peter Bishop (42:06)
That was awesome.

Stephanie's so articulate and just so smart. I imagine it must be super exciting to be around a whole company that's devoted to that topic, versus, you know, creating a little team inside a company. It must be invigorating being around people who are all in on that.

Carrie (42:27)
Yeah, it's like having an AI task force for an AI company. That's like next level meta AI solutions.

Peter Bishop (42:32)
Right. Yeah. There were a couple of little nuggets in there. One I really liked was the one around what I was trying to get at: again, everyone's always saying how Alberta is behind and Canada is behind, but how are you measuring that?

And it feels like the answer is a little bit of, we're not actually sure, right? There are lots of different ways companies are getting into AI, and there are probably lots of companies with people using it, or companies getting dragged along as their day-to-day software just goes into AI. Either that number is going to jump really fast, or it's not completely accurate.

Carrie (43:12)
Yeah, I feel like it's fuzzy. But I mean, it is still a low number. That 6% was still, you know, whether that's fuzzy up to like 10% or 15%, that still seems quite low to me. But yeah, hopefully Alberta plays a little catch-up, which I feel like Amii is doing the hard work to make sure happens.

Peter Bishop (43:33)
Yeah, I thought that was good. Was there anything that stuck out for you?

Carrie (43:39)
Yeah, I mean, I really appreciate her outlook on it. It always comes back to: okay, it's just a tool. There is no inherent good or bad with it. It's all what the humans put into it and what humans are capable of. Like the fact that she's like, we're in control. We're in control of this tool, we created this tool. And it's all gonna be okay. I really...

Peter Bishop (44:01)
Yeah.

Carrie (44:04)
Yeah, I try to live by that and sometimes I get carried away in my head, but

Peter Bishop (44:05)
Well, and even that, I think you brought this up, but the fact that AI is a massive term. Saying AI is bad or good is like saying technology is bad or good. You have to unpack it, or get smaller, to actually talk about it. Some of it's helpful, some of it's not. Some of it's used nefariously, some of it's not. It's a pretty big blanket statement to say it's anything.

Carrie (44:33)
Yeah, totally. I totally agree. And I mean, I think my other takeaway is that I didn't even ask her the three questions I'd written down, because the conversation was just flowing so naturally. But we're definitely going to send some links out. We have the link to the Upper Bound talk that we mentioned, and we'll link you off to the Amii website so you can discover all the good things they're doing.

Peter Bishop (44:49)
Yeah.

Yeah.

Yeah, and hopefully we'll get her back again and we can do a part two, because I feel like there's so much more there.

Carrie (45:05)
Yeah, and I mean, if we talk again in a year, who knows what's going to be happening. It's going to be madness.

Peter Bishop (45:10)
Right. It will just be three robots that will be surrogates.

Carrie (45:15)
I was gonna say, we were talking about the Singularity. We were talking about names for this podcast, like "Singularity Watch" and stuff like that. And it's just so funny looking back on that now. She's like, I don't know if that's ever gonna happen. I was like, okay, okay.

Peter Bishop (45:23)
No.

Yeah, it's funny, because she was talking about their chat and having to tell people that they're real, like, they've got their actual photos on it. And then literally two days ago, someone showed me a tech support, well, not tech support, it was a call-in support line, that is all AI-driven and sounds like real people. You can ask them anything, and they'll book appointments and ask you what's wrong. I get why people are wondering, no matter what, when they're on any sort of chat or IT support.

Carrie (46:05)
Yeah, my first assumption is I'm talking to a chatbot until proven wrong. You are guilty until proven innocent.

Peter Bishop (46:10)
Ha ha ha ha.

Yeah. Okay, well on that note, that was good. I'll talk to you later.

Carrie (46:19)
Yeah, that was great.

Okay, bye Peter, nice chatting.

Peter Bishop (46:27)
Yeah, see ya.

