3 issues I had with AI this week

This week was totally normal and routine for me. I'm sitting here on a Friday thinking it's been much the same as many other weeks in my life as a Tech Seller at Red Hat: enjoyable, busy, varied. However, I am noticing a change in how I go about my week - more and more of my colleagues are using AI.

On the one hand this is fantastic: the democratisation of AI, led by services like ChatGPT, means that anyone without a technical background or grounding in the principles of AI doesn't have to worry about HOW any of those services work - they just use them.

These same colleagues use search engines without really knowing the inner workings (heck, I barely know anything about how search engines work!), and I wouldn't expect or assume that they want to invest time in learning the HOW there either. AI is no different in this respect - AI is just a tool, and they just want the tool to work.

However, this week I noticed 3 occasions where these AI tools just didn't work the way my colleagues /thought/ they would. That is absolutely no negative reflection on my colleagues, but rather an interesting observation about the dangers of not understanding the implications of blindly accepting AI services.

Meeting notes being sent into the void

We all have a lot of meetings - all day, every day, endless meetings. You know what we don't have? Good meeting notes. So much of our time is lost to forgotten actions and previous discussions, because taking good notes is hard and takes time. Very few people want to be the note-taker - I get it!

There are AI services now that will join your meeting, listen to the audio, produce a text transcript, and summarise that transcript into a form of meeting notes. I am absolutely blown away that this technology exists; it is incredible, and the quality is amazing.
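To make that concrete, here's a minimal sketch of how such a pipeline might work, assuming the OpenAI Python SDK (the model names, file name, and prompt are my own illustrative choices, not how any particular note-taking product actually works). Notice that both the audio and the full transcript leave your machine and are sent to a third-party API:

```python
# Minimal sketch of an AI note-taking pipeline, assuming the OpenAI
# Python SDK. Model names, file name, and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: the recorded meeting audio leaves your machine here,
# uploaded to a third-party API for transcription.
with open("meeting.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: the full transcript - every word, every confidential
# number - is sent to the API a second time to be summarised.
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Summarise this meeting transcript as concise "
                       "meeting notes with clear action items.",
        },
        {"role": "user", "content": transcript.text},
    ],
)

print(summary.choices[0].message.content)
```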

The problem? That AI service now has a permanent record of our meeting - every word that was said, any confidential numbers, statements, or comments. That service now holds all of it as data: input to its future learning, training, and sharing.

That data could be exposed in entirely unpredictable ways, the service could be hacked, or quite simply the AI could misrepresent or misunderstand a point made in the meeting.

The point could be made that AI-generated meeting notes are certainly better than no meeting notes, and I'm inclined to agree, but the hidden cost of exporting potentially sensitive data is too high.

An out of date meeting briefing

As much as we hate taking good meeting notes, everyone hates preparing for meetings too. I can't imagine how frustrating it must be as a senior leader to constantly context-switch between different partners, departments, stakeholders and customers - there's no way anybody can have all those high-profile meetings and be well prepared for each one. I empathise with that.

However, a senior leader this week was giving a presentation, and the preparation for that presentation was a quick conversation with ChatGPT the night before. I totally get why this would seem like a quick way to get a set of notes and talking points, but here's the problem - a model is trained at a particular point in time, and only on publicly available data. In this case, its data was 3 years out of date.

That AI could not possibly prepare that senior leader with the new partnership announcements; it misrepresented some offerings and services as being "new" when they were years old, and it missed the salient talking points we would normally cover in that presentation.

The presentation came off generally fine if you didn't know some of the detail, but the lesson to learn here is that models are trained on a data set fixed at a point in time, and only on publicly available data. While they can be useful for preparing some foundational points, using one as the sole source of knowledge is dangerous.
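If you do want a model's help with meeting prep, one mitigation is to hand it the current facts yourself rather than relying on whatever it memorised during training. A minimal sketch, again assuming the OpenAI Python SDK (the model name, file name, and prompt are illustrative):

```python
# Minimal sketch of grounding a briefing in up-to-date material,
# assuming the OpenAI Python SDK; names here are illustrative only.
from openai import OpenAI

client = OpenAI()

# Up-to-date context the model cannot know from its training data:
# recent announcements, current offerings, and so on.
with open("latest_briefing.txt") as f:
    current_context = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Prepare concise talking points. Use ONLY the "
                       "context provided; do not rely on training data "
                       "for anything recent.",
        },
        {"role": "user", "content": current_context},
    ],
)

print(response.choices[0].message.content)
```

Of course, the first lesson in this post still applies: anything you paste into the prompt leaves your control.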

A confusing set of technical steps

Lastly, in yet another meeting, a colleague needed to summarise a series of complex technical steps. These steps were not documented in a form that allowed easy copy and paste, so again, ChatGPT to the rescue. While ChatGPT made a valiant attempt at summarising the steps, the result was blindly copied and pasted to the customer without any checking.

The result was a confused customer - nothing major - but again, this colleague took the result at face value without taking a bit of extra time to check the work. It's not laziness; the output just looks "good enough" at a glance, and it's only when you review the result that the problem stares you in the face.

The last lesson to learn here is simple: check the AI's work. It's fallible, not perfect. AI can create some pretty impressive answers to our questions, but let's not take those answers at face value.

Summary

I started off this blog post talking about how it's been a very normal week. I'm quite certain that AI is being used by almost everybody in every company today. Just be careful about taking those answers at face value!