On the reg

If AI is the hammer, do we have the right nails?

Season 6 Episode 81

Can't be bothered with email or SpeakPipe? Text us!

Jason has been living his best life, travelling NZ (where he did not touch boiling mud, as much as he was tempted to) and having excuses to eat pepper steak pies. Meanwhile Inger has been having a shit time at work and waiting for her puppy to arrive.

After a confession during the mailbag segment, we hear all about Jason's star keynote at the ASPIRE conference, which led to... well, a bit of a rant on Inger's part at least. You get the picture!

Things we mentioned:

Got thoughts and feelpinions? Want to ask a question? You can email us on <pod@ontheregteam.com>

- Leave us a message on www.speakpipe.com/thesiswhisperer.

- See our workshop catalogue on www.ontheregteam.com. You can book us via emailing Jason at enquiries@ontheregteam.com

- Subscribe to the free, monthly Two Minute Tips newsletter here (scroll down to enter your email address)

- We're on BlueSky as @drjd and @thesiswhisperer (but don't expect to hear back from Jason, he's still mostly on a Socials break).

- Read Inger's stuff on www.thesiswhisperer.com.

- If you want to support our work, you can sign up to be a 'Riding the Bus' member for just $2 a month, via our On The Reg Ko-Fi site




I think they're trying to write 'hands on'. Not 'handsome', do you reckon? Oh, maybe. Yeah. Yeah. No, I'll take the 'handsome' and run with it. I still think we should put that word cloud on a brochure somewhere. Like, what are our unique selling features? Handsome. Nice smile.

That's the bit that really made me laugh. Alright, uh, you're in charge. Righto. Okay. Driving the bus. Welcome to On the Reg. I'm Professor Inger Mewburn from the Australian National University, but I'm better known as Thesis Whisperer on the internet.

And I'm here with my good friend Dr Jason Downs for another episode of On the Reg, where we talk about work. But, you know, not in a boring way: practical, implementable productivity hacks to help you live a more balanced life. And of course, Jason is one of the co-directors, along with myself, of On The Reg Team.

And today we are gonna talk about, well, AI as usual. And this time with more [00:01:00] Heidegger, I believe. Jason? Yes. Yeah. With the dead German philosophers. Okay. With suspect Nazi connections, but okay. Yeah. Okay. Separate the artist from the art. That's it today. Yes. And he's gonna recap his star turn at the Holmesglen TAFE staff conference.

Did I get that right? Yep. Yeah. The, the ASPIRE Conference. Holmesglen. Sorry, Holmesglen: the ASPIRE Conference. You were aspiring it. And you sent me pictures of the audience, and there were a lot of people there. Yeah. Though I had the kind of, the main stage bit for the workshops that I was running. Oh. The main stage.

Main stage. And yeah, it was like a couple of hundred people for each session would turn up. Very nice. Very nice. Yeah. It was great. It was great. Looked impressive. So how've you been since we last caught up? Actually really good. I've been to New Zealand. Ah, I know. I was jealous. I got photos and I was jealous.

Amazing. I've [00:02:00] never been to New Zealand before. So we went over to do some work with the wonderful folk at Massey University. Hi Naomi, John, everyone. Um, amazing, amazing country. Great people. People at Massey were awesome. We took a couple of weeks. I went over with Cath, and the first week we just toured around the North Island there.

Uh, and got to, you know, experience some of New Zealand's North Island. The weather was mostly pretty good, which was great. And then in the second week, I worked with Massey for that full week and did a mixture of, you know, our kind of traditional workshop stuff that we do with PhD students and ECRs and that sort of stuff, around research projects and writing under pressure and those sorts of things.

Uh, ran a strategic workshop, um, at the end of that. And then at the front end of that, we did a whole bunch of work with, uh, curriculum mapping and using AI to be able to do all of that, and [00:03:00] also supporting professional accreditation and that sort of stuff.

So it was, um, amazing work, right? Got to do some really cool stuff. And these new technologies... the use cases for them just present themselves. And, you know, when it comes to doing things like curriculum mapping and professional accreditation and that sort of stuff, they're very long, big documents that you have to work with.

And that's what these large language models are really good at. The amount of time they save in something like a benchmarking exercise is... yeah... is truly extraordinary. I just fire it up now and make a cup of tea and watch it do all my desktop research on what other universities are doing, which seems to be a constant thing we're asked to do. You know, what are other people doing?

And it's actually just very tedious locating all the websites and clicking on them, you know. And it can just give you a straight-up worksheet with all the URLs, and you can just go check them, and it's great. Yeah, yeah. Yeah. Very good. Very good. New Zealand I found [00:04:00] both, um, hilarious and terrifying at the same time.

Uh, they've got this very adult way of dealing with people, especially with their signage. So we went to a place called Hell's Gate, which is in Rotorua, and is essentially bubbling mud pools of, you know, super hot geothermal activity. And they had a sign there that effectively said: don't touch.

If you touch this, you'll die. You know, words to that effect. Maybe not quite as blunt. Right, right. Next to the thing that, if you touched it, you would burn yourself. Right. And that thing was right next to the path that you were standing on, and there was no balustrade. There was no... yeah. So you could easily touch it.

There was a bit of me that was like, oh, it's just there. I could reach that and touch that. Maybe it's not as hot as they say it is. Turns out I didn't touch it. There must be some proportion of people a year that do it though, right? Like... right. And I think the New Zealanders, I [00:05:00] think their approach to this is: we warned you.

Like, the rest is on you. Right. But like, we did our job. We went for a hike one day and we got to the bottom of this trail, and there was this little sign there that said something along the lines of: may be steep and exposed in places. And this is after I'd been to Hell's Gate.

Right. So I should have known better. We kind of looked at that, and it also said something like, you know, look after children, or something, you know, words to that effect. Be careful with children. Sure. Yeah. Inger, it was a fucking mountain goat track on the side of a cliff. Right. And it was like a 300 metre drop straight down.

And all the way we were going up there, a storm came in. It was storm-level winds. It was raining. Everything was slippery. I genuinely thought I was gonna die. Um, I'm scared of heights, and I was clinging to the face of [00:06:00] this mountain with a death grip, hoping that I wasn't gonna fall off or get blown off the side of this thing.

Like, there is no way you would take a child up the side of a mountain like that. Right. They do though, Jason. Maybe they're just, like, hardy New Zealand children. Well, that's what I reckon, right? Like, they're just super tough. And I'm all respect, right? Yeah. Oh sure. And look, if you were dying there, it was a very picturesque way to go.

'Cause it is stunningly beautiful everywhere you go there. It's just stunning. Yeah. It's kind of like a screensaver every time you turn around. Do you know what I mean? That is so true. Screensaver country. Absolutely. Yeah. Yeah, it is. Uh, but yeah, anyway, New Zealand: lots of love for the country.

Uh, I hope to get back at some point. Uh, see the South Island, you know, if I get the opportunity. Naomi, if you're listening, like, invite us back, you know, we'd love to come back. Um, and you know, like, there are other universities there as well, so if you wanna [00:07:00] invite us over, I'm all for it.

Yeah. Hint, hint. Like, no pressure. Yeah. Hint, hint. Yeah. Yeah. Uh, Jack's still away at that school for student leadership. Uh, he's back in three weeks. Gotta admit we're missing him, excellent little fella. Um, it's a way quieter house when he's not around. Uh, but also the positive of this is that, you know, I've said this before, we share an Apple Music account in our family.

Yeah. And so when I ask the magic lady to play me some music that I love, often I'll get this urban rap stuff that Jack listens to, which I think is crap. But yeah, the, you know, magic lady in the speaker thinks that that's my favorite music, when it's in fact his favorite music. So when I say play me my favorite music, I get all this stuff.

Anyway, I've managed to scrub that from the algorithm while he's been away. So [00:08:00] that's been an unintended, um, benefit of having him away. It's actually a contemporary parenting and family issue, getting the algorithms to understand who they're talking to, because, um, yeah, I've got the same problem with ChatGPT.

Yeah. Because Brendan uses my account for, you know, his homework, in, of course, prescribed and, I'm sure, completely fine ways. Yeah. Um, but he's got different needs to me. Yes. And, you know, ChatGPT now remembers who you are. So I've lately told him that when he talks to it, he has to say: hello, it's Brendan.

And the other day I asked it, you know, what do you know about my personal life, in front of a bunch of students, just to demonstrate the privacy nightmare that is ChatGPT now. And, um, it said: oh, you have a son called Brendan. Oh dang. Because, spookily, it had picked up that Brendan sometimes talks to it, and it separated out... what?

And I asked it about Brendan and me, and it separated out our different uses. Wow. Scary. [00:09:00] Wow. Yeah. Yeah. I mean, anyway, just algorithm management. Mm. Yeah. Hey, I discovered the second best pepper steak pie in the world. Um, also important, hashtag important, at the Warburton Bakery. Um, Warburton.

Warburton. Get it right, Jason. Warburton. Sorry, it's Warburton, that's how it's said. Hey, Victorian privilege. Warburton. Warburton. Okay. Warburton Bakery. So, as you know, it's been well over a decade now that, when I go to a bakery, I exclusively eat pepper steak pies.

Sure, sure. Like, it won't be any other kind. Yeah. I mean, otherwise they're not worth it. Like, you've gotta have a baseline for comparison. Correct. And so, like, I take this very, very seriously. And so we went up to see Jack, and um, we went to the bakery there, and I ordered my pepper steak pie and, blow me over, second best one I've ever eaten.

[00:10:00] It was amazing. Um, well that's a good bonus. A bonus for going up there. You, you due for another visit? Yeah, we've gotta go pick him up. So I, okay. You know, I've already put it into the agenda for the day. It's like, you know, visit bakery, pick up kid, visit bakery, come home.

What about you? That's an exciting group of things; it's a hard act to follow. Uh, um, so everything's still pretty shit at ANU, thanks for asking. Yes, because I keep telling everyone. I mean, everyone's sick of it.

It's, it's shit. So, but apparently there's gonna be no more forced redundancies. That was a big announcement. Okay. I think we're all a bit gun-shy, 'cause we're all like, yeah, right, okay, we'll see. I mean, it was very welcome news. But in true ANU management style, it was delivered really, really badly. So good.

It arrived as some sort of long, hard to interpret [00:11:00] email, um, and it said, you know, there'll be no more redundancies. Failing to mention that the redundancies already on the table were still gonna be done or considered. So everyone who was under threat of a redundancy thought they had a reprieve and it was so unclear for a couple of hours.

And then the Vice-Chancellor went on the radio and clarified it. And then of course, you know, the bush telegraph fires up, and people who'd actually listened to the radio, the local ABC radio, clarified it first, and all the text chats were lighting up, and it was just, it was, uh, in a word, breathtakingly awful.

No, that was two words. Breathtakingly awful. Poor form. Um, ANU communications team, I've only got one thing to say to you: do better. Yeah. Just do better. Yeah. And, um, I'm begging you to use ChatGPT, because not only would you write better and clearer emails, if you talked to it [00:12:00] about your communication needs, it would probably suggest a better way to go about everything.

Just, like, you know, talk about snatching defeat from the jaws of victory. Right. Yeah. It could have been quite a good moment for our Vice-Chancellor, and yeah, I'm pointing the finger. Could have been a good moment. Yeah. Totally flubbed it, let down by the people doing that work. I just dunno what to say.

Just, yeah. Anyway, so, you know, the text chats have been popping off, I can imagine. Um, I would say the happiness level is not any better and I've just spent three days on campus. I don't normally spend that many days in a row. 'cause to be honest, just running into people who are losing their jobs and people who are mad.

It's just such a downer. And trying to preserve my kind of emotional wellbeing, in order to actually bring my good self to teaching and all that sort of stuff, has been really hard. So after those three days, [00:13:00] today I just needed to detox and sit at home with a wet flannel over my head. Like, honestly.

Anyway. So no more redundancies? Maybe? Probably. Who knows? Yeah. At the, at the moment.

Just a quick editor's note, listeners. Inger here. As I'm editing the pod, subsequent to all of this kerfuffle at ANU, the Vice-Chancellor resigned. Uh, we've got a new interim Vice-Chancellor, and one of the first things she did was announce that the redundancies are now not going to go ahead. So she's undone all that needless cruelty, which is a good first step.

So I just thought I'd catch you up there. Okay. Carry on.

Anyway, better news: I've been travelling for On the Reg. Yeah. So I've been up and down the Melbourne to Canberra road a couple of times in the yellow tiger, now the suspension's been fixed. And um, I got to look after Ginger Cat while you were in New Zealand.

Yeah. And we bonded, Jason. Dang. Yes. Me and Ginger [00:14:00] Cat. She's a misunderstood creature. Cath's right. The first day I got in there, I got in quite late at night. Yeah. And I turned on the light and she kind of looked like, what the hell's going on? This, like, cat's just sitting there in the dark. And um, she just... hi.

She just hissed at me, like, yeah. And then I walked past her and she took a swipe at me, and I thought, this isn't gonna go well. But then I fed her, and um, then she started love bombing me, and then she alternated between hissing and having a go at me and love bombing me for about three days.

And then, um, after that, we are great mates. Right? The cat's like this. Yeah. There's a glitch, that's all I'm saying. She's not quite right in the head, but we love her anyway. You're not quite right. And um, Cath sometimes sends me photos of her. Like today, I pulled out a jumper that I had at your house, and it had ginger hair all over the back of it.

Oh yeah. And I went, oh, I miss her [00:15:00] fluffy little face. So I sent a text to Catherine and she sent me some Ginger pictures. So, you know, I get my Ginger fix. God, that bloody cat's hair is everywhere in this house. I know, she sheds pretty badly. Yeah, she's a shedder. Yeah, yeah, yeah. Um, I did not enjoy, I've gotta say, changing kitty litter, so I will not be getting a cat.

So, um, so there's that. I just, like... internally I'm laughing, because, you know, you want to get a dog, and all I'm saying is that a dog is bigger and less contained. That's all I'm saying. Right? No, see, so I'll skip to the last item on my catch-up agenda here. People have been asking me for a puppy update, so I'll give you one.

Okay. So that the kind of dog I wanna get does not shed like, right. This is its selling feature. Once it goes past the puppy coat, it never sheds again. You have to brush it a lot and do a lot of grooming because of that. Yeah. Right. And [00:16:00] you have to cut it, but it doesn't shed, so this, like, this is one of the reasons I've been holding out and it's a bit of a saga how long I've been on this waiting list.

Um, because I want a very particular breed, Jason, as I think I've mentioned: uh, a Coton de Tuléar. Mm-hmm. And I had to look that up on YouTube to get a pronunciation video, which I will share so that people can say it. So: Coton. Um, and it's a variant of the Bichon Frise, and they're quite rare. Um, there's only one breeder, really, in Australia, who's Michelle up in Bundaberg at Cotton Run.

Um, and she rang me the other week and I could have had a puppy that was born this week, Jason. Oh. So I could have had a puppy from that litter. Um, but then it would've arrived at my house just before I went to the UK in Yeah. December, which I've just committed to doing. And I wanna be a good puppy parent.

I don't wanna be an absent parent. Right. So I wanna like, especially 'cause Luke is like, this is your problem, your dog. Yeah. So you better do the work. [00:17:00] Yeah, yeah, yeah. So, um, yeah. So Michelle is saved me the first pup, first female puppy from the next lit litter. Um, and mama is Blossom. I don't know who Father will be. Um, uh, but Blossom is due to deliver at the end of November, so we should get her at the end of January. So I'm gonna include a link to Blossom's homepage 'cause Blossom has a page so you can see how adorable, ridiculously fluffy God. So, so dog hair hopefully will not be an issue.

Hopefully not. It wasn't the dog hair that I was referring to, like, you know. You said you didn't like changing the litter, and so you won't be getting a cat. Oh yes. I'm just saying dogs are bigger. They're... picking up the poo-poo. Yes. And they don't go in, like, a little sandbox thing.

Like, they don't just go in one spot. Yeah. The dogs, they just go wherever they can. Yeah, yeah, yeah. All the best. Well, you know, [00:18:00] I've actually been looking into the technology for that, Jason. Yes. There's technology. I'm sure there is. Yes. Stay tuned. Anyway. That's right. I was in Melbourne, and we were at, um, Melbourne Uni together doing some writing teaching.

Yep. And that's maybe the first time we've done a whole day of, like, deep grammar nerdery together, Jason? Yeah, I think that went well. Then I did Rich Academic, Poor Academic, but I'm renaming it. I think what I really delivered for them was, um, Leveraging AI in Your Research Process. So I've renamed the workshop. Yeah, because I've drifted so far from the book that Noelle and I wrote that it doesn't seem fair to give it that name anymore.

Yeah. But, um, I caught up with my friends Mark and Christian. Mark sometimes listens, so hello Mark, if you're listening. And friend of the pod Coralie, who says hello, Jason. And we had a very lovely dinner. Hi Coralie. Uh, yeah. And we ate food and we drank wine and had a laugh about something that happened to her at work. So we had a good time.

[00:19:00] And, um, a big shout out to Phil, who is, uh, is I think the head of the school of nursing and midwifery there. Anyway, he was the organizer for this event and he tells me that he listens to us, Jason on the drive. He lives in Sydney and he, he works at Newcastle. So he has like a one and a half hour, two hour drive to work.

Um, and he says that he listens to us on double speed. On double speed. Oh, okay. Great. Yes. So he said it was a bit weird to hear me on not-double speed. So I guess he's used to hearing me talk like this or something, I dunno. Anyway, so, yeah, that was good. Anyway, I just... I love Newcastle. I love staying at Little National.

I love that they have car chargers. I love everything about it. Please ask me again any time, I'm there. I'm gonna have to, um, pry Newcastle from your cold dead hands, aren't I? Right. Yeah. If you wanna go, you're gonna have to fight me, 'cause, yeah, I love it. Love it. [00:20:00] It's like all the good bits of Sydney without the Sydney.

Yeah. Okay. You know, it's got awesome beaches, like the most amazing beaches you've ever seen in your life, at Newcastle. Incredible. Yeah, yeah, yeah. Cool. Anyway, yeah, that was good. So you are, uh, in charge? I'm driving the bus. I'm driving the bus. You are? Yes. You're doing such a super job right now.

Okay. Well, you can't see me at the moment because my video's turned off, but I was giving you meaningful stares. And I was just totally missing them, because your internet is so crap I can just see a J in the middle of the screen. It's lucky that we've talked to each other on the phone for so many years. Like, yeah. I should probably just turn my own video off.

Like it's, yeah. I should probably just turn my own, video off. Okay, great. So this is the mailbag and,, we love hearing from you all and this is our chance to share the interesting things that our listeners share with us. So I think we've successfully transitioned to our new email address 'cause we have so much mail we can't get through it, but if you wanna mail us, we pod at on the reg team or one [00:21:00] word.com and we'll make sure that your email makes it to our mailbag.

And we are working our way through our mailbag. Like, we just love how much we get. Keep it coming. Don't stop just 'cause the mailbag's full. Um, because sometimes we have a little hiatus, and then we catch up, and then we have to beg for more mail. So keep it coming. We'll get through as many as we can today.

But before we start, Jason, I've got a confession to make. Oh yes. Confession? Confess away. Um, so you know how at the end of the episode we say we'll read our pod reviews, and you know how I scramble to read them and then I say we haven't got any? You know how I do that? Yes, I do. Yeah.

Every episode. Yeah. So how long have we done this podcast for now, Jason? Five years. Uh, we're up to episode 81. We are in season six. So we've been at this for, you know, coming up on six years, quite a while. Yeah, sure. Okay. Sure. So for the first time in ages, I had a little bit of time this morning to prepare [00:22:00] for the pod.

I thought, I'm gonna actually get that section ready. So we get there, we've got the reviews. So I get there and I'm like, oh, no reviews. And then in the corner of my mind, a little tickle comes out that says: hey, someone on another podcast who was reading out their reviews the other day said, oh, I just found the overseas reviews, and they weren't visible in my version of the pod.

And my little meat computer filed that away. Right? And so I thought, huh. So I asked Google, and Google's like, yeah, you have to do this and this and this. And I'm like, really? Okay. And then, not to bore you with detail, but to get through to the Apple Podcasts backend, there's like a little secret doorway, right.

That you have to go through. And I very rarely do it unless I've got something technical that needs to happen with the pod, 'cause Buzzsprout handles it all. Right, right, right. But sometimes I go in there, and so I went in there, and then I couldn't find what Gemini slash Google told me to do. So I went to Chatty and went: hey [00:23:00] Chatty, you got any tips on how to do this?

Of course, Chatty knew exactly how to do it: open this, then open this, then open this. What I discovered from that, Jason, was that we have so many reviews. Oh, do we? From other countries. It's embarrassing. Oh no. And the way that, um, Apple indexes them is you've gotta select a country from the 170-odd country list.

Yeah. Like, each one separately. Oh, good. To see them. So, right, I can tell you I started at the As. We haven't got any from Antigua or, you know, um, Albania. No listeners there. But I had a quick scan through, and I thought maybe we've got some from the UK. Yes, we do. We've got heaps. The US? We've got three from the US, and lovely ones, like lovely, lovely reviews.

So some of them go back to 2021. Excellent. So I suggest it's a little project for us, as a thank you to those people. Yeah. [00:24:00] To read them out. We won't start this now, but we will do, like, a bit of a readout at the start of the mailbag, and a thank you, right, especially to those people. Okay. Are you proposing... how do we track all this so that we don't read the same ones more than once? Well, I've gotta sit down and actually put them all into a spreadsheet, and like, there's a bit of work to do, a bit of producer work, Jason, that I have to do to make it organized.

But I will do that, and then we can start reading them out. Yeah. Okay. I have, like, my business systems hat on at the moment, and I'm thinking this sounds terrible as a way of managing that. It's not the best system in the world. Can I just say: Apple Podcasts, do better. Can't we export it easily? There's something you can export to a CSV sheet, but only, as far as I can tell, one country at a time.

Mind you, I gave it 10 minutes before I [00:25:00] gave up. I didn't ask Chatty again how to do it. Yeah, and maybe I'm missing a trick there, so, like, save me some time is all I'm saying. Hopefully I can just download them all onto one spreadsheet, and then, you know, we can keep a record, and then we can see which ones we've got, which ones we've talked about, which ones we haven't.
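[Producer's note: if Apple really will only export reviews one country at a time, merging the exports is a five-minute script rather than an afternoon of copy-paste. A minimal sketch, assuming each country's export has been saved as a CSV into a reviews/ folder and named by country (e.g. reviews/UK.csv); the folder layout and the read_out_on_pod column are our own invention for tracking, not anything Apple provides.]

```python
# Merge per-country Apple Podcasts review exports into one spreadsheet.
import glob
import os

import pandas as pd

frames = []
for path in glob.glob("reviews/*.csv"):
    df = pd.read_csv(path)
    # Tag each row with its source country, taken from the filename
    df["country"] = os.path.splitext(os.path.basename(path))[0]
    frames.append(df)

all_reviews = pd.concat(frames, ignore_index=True)
all_reviews["read_out_on_pod"] = False  # tick these off episode by episode
all_reviews.to_csv("all_reviews.csv", index=False)
```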

Just saying I just need a little time Jason to like do it properly. Great. And I think it's worth it after all these years, all these years of shaming people for not writing reviews, where's their reviews? And all this time they're like, who the fuck are these people? They're so mean. They never read out my review.

Why should I bother?

I'm just saying thank you. Thank you, Peter. I know.

We are honestly... it's like you've stumbled on professionals, right? Okay. So look, I know Jonathan's gonna write another letter. [00:26:00] Isn't he? Yeah, yeah. About us. Yeah, yeah, yeah. Maybe even a review. And this is why we deserve it, Jonathan. Can I just say, we deserve it. This is why I think Martin Emo has been writing to us by email now.

Like, he's just stopped putting reviews up, because we can't see the New Zealand ones, right? Like: no, I wrote a review. You just can't see it. Yeah. And you've been telling me this on and off, and I'm like, huh, no, I can't see it. Did I do further investigation? No. No, I did not. Anyway.

Alright. Luckily, luckily, luckily, we run our business so much better than we run our podcast. I think it's time that we took our business processes, which are sharp, and started applying more of them to the pod. I'm just saying, like, you know, it's still a bit ad hoc. It's a fun side hustle. Oh dang. We need... right, I'm gonna play the SpeakPipe. I think we need people, Inger. Like, I'm just saying, I think we do need people. We can't afford people yet. Okay. I'm gonna play the SpeakPipe.

I think we need pe I think we need people. Inga like, I'm just saying, I think we need, we do need people. We can't afford people yet. Okay. I'm gonna play the speak pipe. Hi, Jason Linga. Um, love the [00:27:00] podcast. I just wanted to send a quick message, um, because I just wanted to.

Affirm, no, that's not the word. Validate the feelings that Jason had around closing an email chain, um, or closing the loop on an email. Um, I do feel a bit of maybe internal pressure to respond to those emails. Oh, it's, and, um, I can spend way too long agonizing over what to write, um, in response to close things.

Um, and I feel like a game changer really has been sending reactions, like on Microsoft 365, or using Outlook, just sending a reaction, like a thumbs up or a love heart, actually, um, has been a game changer. It's been really, really good. Um, so I mean, I am still working towards that ideal of, you know, in response to 'did you get my email' saying, well, did you send it?

Um, but I think that's a level of [00:28:00] courage, um, maybe enlightenment, that I may not achieve in this lifetime. Uh, love the podcast. Can't wait to hear the next one. All the best. Oh, that's great. No, that's awesome. That was from Lily. Yeah. Um, and so, yeah, I thought you'd like that one. Jason, do you feel affirmed? I feel 'affirmed' is the right word.

You're affirmed. Yes. And you, needing to have closure on email. I got into one of these conversations just the other day with a potential client, and they also, I feel, have the need to close the loop. And so the thread was probably four or five emails longer than it needed to be. But that's okay. I'm like: no, you hang up first. No, you hang up.

But that's okay. Like, I'm like, no, you hang up first. No, you hang up.

Um, I would love to be able to use, like, the reactions and stuff, but, uh, we use Gmail for our work. Mm. And, yeah, it just doesn't have all of that stuff. [00:29:00] So it kind of hasn't changed in, like, 15 years, really.

Gmail, it, it hasn't kind of updated. It's just kind of still the same. I mean, other than the, um, emojis that you can add and like those reactions that Lily was talking about. I mean, it does its job. Like, it's pretty basic, but it bloody does its job well. Yeah. I mean that's why, like why if it ain't broke, why fix it?

Yeah. It's like... Google's definitely underinvested in the email. But I think maybe they just got it right, and they were like, why change it? Yeah. It's too much trouble. Too many clients. Can you imagine how many millions and billions of accounts they have? Oh yeah. On Gmail. Must be pretty crazy. All right, well, thank you Lily.

Thank you for affirming Jason's life choices. We've got a brilliant letter here from Mary. No, hang on, this is you, 'cause it's pink. Yes. Yes. Um, from Mary in Nottingham. Um, and Mary had just finished her PhD viva [00:30:00] and she wanted to share thoughts on Gantt chart discussions. Uh, so you remember we talked a little bit about Gantt charts, I think maybe a couple of episodes ago.

Mary's advice to students... Did we talk a little or did we talk a lot? Did we get on a Gantt chart rant? I can't remember. We did a bit, and we did say that we could potentially do a whole episode on Gantt charts if we wanted to. Um, but it's okay, we stopped ourselves, Mary. Before we went too far.

We did. We got ourselves under control before we went too far. Mary's advice to students asking about those Gantt charts? Simply, she says: don't. Right. Like, yes, Mary. Yes. She recommends printing out a paper calendar, dividing it into equal blocks for each, like, work package, color coding them, and then just getting on with it.

And then not worrying about the perfect breakdown from that. So one of her biggest realizations was discovering there aren't actually 365 working days in a year. Yes, [00:31:00] true. Between weekends, holidays, and life events, she found only about 217 possible work days. And so Mary's Gantt charts became useless the moment she created them: too detailed for quick reference, too broad for daily direction.
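[Producer's note: Mary's 217 checks out. Her email doesn't give her exact categories, so the breakdown below is just one plausible set of illustrative numbers that lands on her figure.]

```python
# One plausible route to roughly 217 working days in a year
days_in_year = 365
weekends = 104            # 52 weeks x 2 days
public_holidays = 11
annual_leave = 20
sick_and_life_events = 13

working_days = (days_in_year - weekends - public_holidays
                - annual_leave - sick_and_life_events)
print(working_days)       # -> 217
```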

The fundamental problem, as Mary puts it, is: how do you break down tasks you've never done before? And she argues that people don't know what they don't know, which is why Gantt charts work best when you already understand the project. And that's a good way to think about it: if you can't actually predict, with any degree of certainty, how long a particular task is gonna take, then drawing it out on a Gantt chart is not gonna be helpful to you. I totally agree. Like, my workmate Dr Lindsay Hogan and I did a project management workshop on Tuesday, all day. Um, we both wore our lady tradie outfits for it, so we both wore overalls in honor of getting the work done. Yeah, beautiful. And, um, Holly Noble, regular listener, gave you a book.

Holly was there, so shout out to Holly. Hi Holly. And, [00:32:00] um, we had a lively discussion about not knowing what you don't know. And we talked about Bent Flyvbjerg's How Big Things Get Done, and his idea of reference class forecasting: finding someone who's done a project like that before and asking them about their time and their breakdown and stuff.

That's probably the closest tool you've got. Yeah. But then every project's different. You're a different researcher. It's a slightly different problem, you know, so you can never... That's absolutely right. You don't even know what you might be spending your time on. So yeah. They're just artwork.

Yeah, yeah, yeah. I really believe that. Anyway. Yeah. Hmm. Um. Mary says that she's a neurodivergent researcher and she, so she touched on the use of AI in academia, uh, to sort of support this project management stuff. And, um, side note, we do a little bit of that, like we talk about where you can use AI as part of research process, um, you know, figuring out, , tactically where to apply it to [00:33:00] reduce some of the burden around that sort of stuff.

Mm. And she experimented with using Claude to create Gantt charts, and found that it wasn't particularly useful. And so really her overarching insight was that maybe Gantt chart exercises aren't about the outcome, but about the process of working out what you hopefully will accomplish over the next few years.

So it's the work of the Gantt chart: thinking about the work. And the process of thinking about how you might represent that in a Gantt chart might help clarify, I think is what she's trying to say, the scale and scope of the work that you have to do.

Well, it's like you and I say quite frequently, you know: it's not the plans so much as the planning. That's the valuable thing to be doing. Um, so you don't always plan and then execute. Like, rarely do you get the pleasure of a task that's just plan-and-execute in life. Yeah. And yeah, that's exactly right.

I, I just wish we had a better [00:34:00] way of doing it than Gantt charts because I feel like people think they should work and then they make them and then they beat themselves up for not following them. Or they use them to argue for money and give themselves time to do something. And they're an inadequate tool for that.

Then they find themselves in trouble because they believed the Gantt chart. And people don't often enough say that Gantt charts are fictions, like we are doing here. So anyway, you are welcome. Gantt charts are largely fictions. Yeah. Although I argued about this a lot with, um, Amy Grant when we were writing the project management for researchers book, which, by the way, Jason, is in the pipeline, is actually being made at the moment.

Yes. It has not yet got a cover, but it has got a little, like, home on the web. You can order it. And I'm pleased to say that between me and ChatGPT, there was one perfect copy editor, because there were no notes. Not a single one. I did not make a single spelling mistake. I didn't miss a single reference.

All my capitalizations, [00:35:00] everything, was there and correct. This has never happened before in my life, Jason. Well done. Congrats. ChatGPT by itself? No good. No bueno. Yeah, yeah, yeah. Me by myself? Definitely no good. Between us? Awesome. I'm gonna make an argument for, um, AI as augmentative technology a little bit later today when we discuss our work problems.

So, um, yeah, I'm on board with you, and I'm glad to see that it actually works, so that's good. Um, yeah. Yeah. Anyway, Amy and I argued a lot about Gantt charts. She reckons that in things like health research, they are important and do work. So, like, we shouldn't be too quick to dismiss them completely out of hand, but I think for a lot of people, they don't work.

So take that under advisement. I think, um, they might be useful if you are an [00:36:00] actual project manager type thing, and that is your one job, right? Like, if you are responsible for getting from point A to point B, um, because you can then move things around on your Gantt chart as things change. But for everybody else, yeah...

I'm not so sure that it's useful. Yeah. I mean, when I used to work in architecture, they would tell you how behind you were. Yeah. Right. And architecture has timelines. Like, you know, if you pour a slab, it takes, I don't know, I can't remember, 28 days to cure or something. Like, it just has to sit there and set.

Yeah. And so those are just facts. They're just physical facts. Yeah. So you can put them in a Gantt chart, because, you know, that's how long that stage takes. And if it rains, it takes a bit longer. So, you know, but anyway. Yes. Um, oh, we have to get off our Gantt chart rant. Again, Mary, thank you for getting us on this long Gantt chart rant.

It's a very triggering thing for me. Can I just say, I've spent a lot of my [00:37:00] career agonizing about not being able to follow Gantt charts. Right. This is the last one we're gonna read out today. Yep. So that we've got time to actually talk about Heidegger. Yep. Yep. So this one's from Janet, who wrote us a very long email and had lots and lots to say.

And so what you've done here is you've taken the first bit of her email, Jason. Yep. And then, um, and we might come back to some of the other points because she asked us a lot of questions that were all good questions. Thank you Janet. So we'll just bite off what we can, what we can tackle here. Yeah. So Janet writes, hi all.

Just wondering if you've heard of an AI called Perplexity AI? I have. Jason, have you heard of it? Heard of it, haven't used it. I had a play, um, mm-hmm, because it has fans. Yeah. Um, it was recommended to me by an academic in philosophy and ethics, a friend, and a lawyer who is also in the local council, who's using it to evaluate the legality of council agendas. Interesting.

And decisions. And they're not good at compliance, I can imagine. That's a very useful task for AI, actually. [00:38:00] Um, the councillor lawyer was able to share a link to the discussion on Perplexity, which was helpful. I tried testing a link to a discussion on ChatGPT, but only the last prompt was able to be shared, and not the discussion. Maybe Perplexity is the same.

Maybe perplexity is the, is the same. Um, she says, I think Perplexity uses Claude and chat GPT in some way. Yes, it does. So it's basically a kind of router of the different platforms with a bit more technology built on the top. That's, as I understand it, but Janet asked Google Gemini to do a compare of perplexity and Claude and chat GPT and it.

Google said that perplexity seems best at collecting references, but not so good at finding relevant information by conversation. Janet says, I haven't tried Claude yet. I'm deep in with chat GPT and have to keep reminding it to provide counterpoints and arguments, not just agree all the time. Yes. That's a constant thing with Chatty G.

Chatty G is friendliness and flattery are disconcerting to me, but good modeling 'cause I forget to do friendly. Evidence in this email. [00:39:00] Can you gimme a rough compare of ais and what they're useful for? I'm, I'm gonna sink money into some, do I try all three? Just chat. GPT, which is the most reli, which is the most reliably objective and provides proper fact checking, not just agreeableness.

So I can answer that one really quickly. None of them. Mm-hmm. None of them will do fact checking. Yeah. Um, chat GPT is probably the best because it, it's connected to the web and you can ask it to provide you, you know, show you're working or show the sources or when you're citing a paper, always give me a DOI and then at least you can go out and check where it's getting things from.

Uh, Google NotebookLM, similarly: when you put stuff in there and you ask it to do an analysis, it'll pop out a little footnote thing. So when it makes a statement, you can click on the footnote and see which document it's looked inside. So they provide kind of fact checking mechanisms, I suppose. But none of them are gonna be any good, because it's just a fundamental design constraint.

'Cause it's just statistics. Like... [00:40:00] well, I mean, very, very fancy statistics, but, you know: how far away words are from each other in an embedding space, and therefore how likely it is that a word will occur with another word, and then it generates. So it's got no grounding in truth. Um, but I suppose the deep research tool on ChatGPT I've found lately to be pretty decent.
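[Producer's note: for listeners who want to see what "very fancy statistics" cashes out to, here's a toy sketch of next-word generation: score every word in a vocabulary against a context vector in embedding space, turn the scores into probabilities, and sample. Notice that nothing in it checks whether the output is true, which is Inger's point. This illustrates the mechanism only; it is not how any production model is actually built.]

```python
# Toy next-word sampler: similarity in embedding space -> probability -> sample
import numpy as np

rng = np.random.default_rng(42)
vocab = ["the", "cat", "sat", "on", "mat"]
embeddings = rng.normal(size=(len(vocab), 8))  # one vector per word
context = rng.normal(size=8)                   # stand-in for "the text so far"

logits = embeddings @ context                  # how well each word fits the context
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
next_word = rng.choice(vocab, p=probs)         # sample; no fact-checking anywhere
print(next_word)
```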

Um, and the agent mode can also be very helpful at just going out to other sources and grounding what it says in documents that actually exist, rather than just making something up. Yeah. And if I can just also speak to the agreeableness: I think it's got less agreeable in ChatGPT 5.

Um, other people say it's got more. I experienced it as being less sycophantic in this version. I like it better. Um, but, like, friends have rung me up and gone: I hate ChatGPT, they've taken away the personality. And I'm like, I don't really see that much of a difference. If anything, I see it being a little bit more curt and to the point [00:41:00] when it generates a message, to the point where I've stopped thanking it.

I don't feel like I have to thank it. Yeah. Which is kind of a closing the loop thing, but I don't know. What do you reckon? I mean, if I had to spend money on just one, my advice at the moment is just buy a standard... um, a standard access to ChatGPT is good enough for most people's needs, and it's probably the only one you need.

Yeah. Like I, I don't know, it's, it, I think that these models, they move backwards and forwards, um, between them, like the top ranked models, you know, one will release a thing and then another one will release a thing and, you know, one's better for a short period of time and then another one's better for another period of time.

So I agree with your 'choose one and just get on with it' approach there. And you're saying ChatGPT. Uh, the other thing that you could do, if you chose, say, Claude, is to choose the [00:42:00] right model for the kind of work that you're doing. So both Claude and ChatGPT allow you to choose different large language models in the background.

And some are good for some things, and some are good for other things. So it's worth understanding what those different models do. And the promise with ChatGPT 5 was that it would automatically select the right model, given the task that you had given it. So it was making those choices for you in the background.

But if you choose the right model, then I think that will go a long way to helping you get good results for the kind of work that you're looking to do. So, an example of this: just the other day I had to put together a document, and there was a lot of thought that went into the back of it, like my thinking. It was a really complex thing that I was trying to do, and I knew that I needed the most powerful model to be able to handle that kind of work.

And it was [00:43:00] just a different experience, right? Like, it truly did understand what I was trying to do, and it leant into it in the way in which I expected it to. And so between us, we were able to get a really good result out of that.

And so we were able to get really between this, we were able to get really good result out of that. But if it'd chosen a different model, I, I probably would've got frustrated with it. , Or if, if I just went with a default model, it might've, , it might've just defaulted down to a, a smaller model because of course they're cheaper to run.

So, uh, yeah. Yeah, yeah. So, I don't know, choose one chatty Claude, but pick your model, the one that you're working with, pair it up with the task you've got. I think, I mean, my, my reason for choosing chatty and now being my work husband and leaving Claude and just occasionally having affairs on the weekend with Claude, like my, my reasoning for that is the way that, um, Chatt PT remembers you between chats, which, you know, as I said, privacy nightmare and a problem with if you've got more than one family member using it, unless of course you go out of your [00:44:00] way that he always says it's Brendan here.

And yeah, it learns... it sort of forms a theory of mind. That's a concept, you know, from lots of cognitive research: that we have a theory of mind of what another person is thinking, and so we can predict their behavior a bit by making a theory of what's going on in their mind. Theory of mind.

Mm-hmm. Right? Mm-hmm. And we have things like mirror neurons, so that when we see a person in pain, our brain fires like we're in pain, but it doesn't reach the threshold of actually causing us pain. Oh man. So things like that. Yeah. Do you know, mine fire sad if people cry.

Like, I'm instantly sad. My mirror neurons just go sadness. Yeah. Like, you cry, I cry. I can't... oh my god. Funerals. I cannot even stop it. Oh, don't even go there. Funerals are... oh my God. I walk into a funeral and I just start crying, 'cause I'm like, don't even try to stop it.

Yeah. Like, [00:45:00] I just ugly cry into my husband's shoulder, usually. I ugly cry. Yeah. Yeah. And this could be for a person that you've never met, or someone I dunno that well, someone's parent, you know. I'll just cry. I could be in Outer Mongolia and walk into a traditional funeral in the steppes of, like, whatever mountain range that is over there.

And I would start crying. Like, I don't even need to know the person. Right. It's just sad people around me, you know? Like, I'm a complete wreck. It's actually an interesting thing to think about, right? If ChatGPT is forming a theory of mind, but has no emotions that go with it.

Yeah. That's like the definition of being a sociopath or a psychopath, is that you're actually very good at understanding what's going on in other people's heads. You just can't feel the feeling that they feel. You can't feel empathy. Right, right. Which is a feeling in the body of someone else's feelings, so you can tune in.

So that is some scary shit that [00:46:00] they're designing, that's all I can say. But also, it knows me so well, and it just catches the end of the rope now, the more I work with it. Yeah. And for me, model power is less important than that, because it means that the priming process that we often spend a lot of time teaching people just isn't so necessary in ChatGPT.

You just don't really need to do it. But you have to work with it for a while and correct it and, you know, show it examples and teach it how you wanna work. Yeah, yeah. Um, and so, you know, that means it's really important what you actually tell it. When I started to realize how much of a theory of mind it had of me, I thought: right, we are never having a private conversation.

We are never having a private conversation. 'cause for a while there I was taking photos of, um, my food and I was getting it to assess my insulin levels, undo calorie breakdowns. I was pretty good at that by the way. And taking pictures of menus and suggesting, which was a sugar, like low sugar versions of things and stuff.

And I've just stopped doing any of that. Yeah. And I do [00:47:00] demonstrate in the classroom, you know, I get it to tell them about me, and it does this thing and people are like, whoa.

Then I'm like, tell me about my family. Uh, tell me about my eating habits. Tell me about my emotional life. And about my emotional life, it says: oh, we don't talk about that. And I'm like, good. We don't.

Yeah, people using it for therapy is such a bad idea. Not only that, it will probably give you a psychosis 'cause it'll agree with you. And you know, there's terrible stories around that. But also just, do you want a big conglomeration to know your deepest, darkest insecurities and fears? No. I'll tell them on the podcast.

Everyone can hear that. That's it. You have to listen. If you wanna know about that stuff. I'm not gonna, yeah. You have to get at least 20 minutes in.

Uh, that's funny. All right. I'm driving the bus, aren't I? Yeah. Oh my God. Okay. Work problems. This is a problem we had to solve since we last spoke, and in this part of the show, we focus on just one aspect [00:48:00] of work and we nerd out about it. Jason. Yeah. Um, sometimes it's about problems we've had. We haven't done one of those for a while, I feel.

Yeah. Like, we haven't had a 'here's a problem'; it's just that you've done lots of problem solving lately. So I just think, sidebar, producer's note: we should have a go at one of those soon. Or a theme, a theme suggested by a listener. Sometimes we read books and decide if they're bullshit or not, but we're always practical. We share our tips, hacks, and feelpinions... they're opinions and feelings at the same time. They're relevant.

We share our tips, hacks, and feel opinions, uh, their opinions and feelings at the same time. They're relevant. And this week our topic is how to think about AI in the workplace are practical and philosophical approach. Jason? Yes. Tell me about it. So, well as you were saying earlier, I had to present at the Holmes Glen Institute at their Aspire conference.

So, um, Holmes Glen are a tafe, very big tafe, um, yeah, by the way, like very, very big. So you might wanna tell our overseas listener what, what a TAFE is. They're a vocational [00:49:00] college, so they, yeah, vocational college. So TAFE stands for technical and, uh, further education. Further further education. Um, I did three years in tafe, by the way.

I was three years a TAFE teacher. Oh, really? Yeah. They work hard, man. Oh yeah. Their teaching loads are high. Yeah. Yeah. And the documentation burden also. It's 25 hours a week in the classroom. It's a lot. Yeah. Yeah. That's a lot. Oh, it's massive. Yeah. Yeah. Um, yeah. And that documentation burden, you're right, is very, very different to higher ed.

Much more prescriptive. Yeah. Which, uh, which I think is why AI in that space is brilliant, right? Mm-hmm. You know, all of a sudden being able to build stuff that needs to meet those requirements of the regulator... AI really helps. That's a game changer for that stuff, I reckon. Anyway, the challenge was, um, they'd hired us, Inger Mewburn, On the Reg Team.

I think next year you have to. They've seen [00:50:00] me for two years in a row now, so I think next year you're gonna have to come down and wow them. Something, I dunno. Can we just say also, Jason... that was your first gig on your own, wasn't it? Last year? Yeah. And they wanted you back. So you rocked it last year, because they wanted you back again. But your first gig: you rocked it.

Look at you. Uh, look, yeah, it was the first one where I didn't have to hang onto your hand as well.

It was good. It was good. Um, so I had to give three presentations, one on AI and professional work, like administrative professional work, one on AI and teaching and learning. And then the third one was about, what they wanted was something on like an intermediate level workshop on, uh, implementation, like strategic implementation of AI across an organization.

Um, and that was an interesting one to actually think about because when I pitched those three ideas to them, I was like, oh yeah, that wouldn't be too hard to do. You know, it would [00:51:00] be use AI in this, like in this, to solve this kind of problem and this kind of problem and this kind of problem. Um, there's plenty of use cases out there.

Like I thought that pulling that together was gonna be reasonably easy. But when I actually started to pull it together, I realized that you can't, like every organization's different. Um, and every implementation and the way in which people work are all gonna be different. And so just sort of saying to people, do it like this, do it like this, hold it like this is not actually gonna be particularly useful.

That what they actually needed was a framework, like a thinking framework for the way in which they go about deploying AI across their organization. Um, and so to that end, I was like, oh, okay. How do I come up with a framework that's gonna be useful enough for an organization like this that has very distinctive parts to the organization?

I mean, they've got all of their administrative stuff and all of the stuff that goes along with that. They've got all of their student stuff. They've got all of their teaching and learning stuff. There's people doing research as well. Like, how do you come up with a [00:52:00] framework that addresses all of those sorts of things, and then discuss it and talk about it in 70 minutes?

So, um, it's challenging. Yes. Yeah. Yeah. So one of the things that's been really sitting with me since episode 78, Inger, and that was the one where you realized your ambition to become a cyborg. You remember? Yes, yes. You talked about your research approach using AI, and how it's very much a backwards and forwards, a back and forth thing.

Um, I have been thinking long, deeply and hard about that ever since that episode. And I reflected in this presentation that I gave to Holmesglen on very much that same idea again, um, about using AI not as an assistive technology, but as an augmentative technology. Hmm. So, you know, still being in charge of how the [00:53:00] technology's used, and still having oversight over that, but also recognizing that the technology now is so good that it can often lead you to a position that you may or may not have got to by yourself.

It will certainly lead you to that position much faster than if you had to sit down and kind of work your way through it by yourself. Mm-hmm. So as I'm starting to write this thing, I'm like, of course I'm gonna use Claude to help me write this. Um, and to put this together, I had the insight to say: righto, Claude, strap in.

We're gonna have a conversation, and I really want you to test my thinking and I'm gonna test your thinking, and we're gonna have a conversation, for reals now. Mm-hmm. And, uh, I started to then work through what I wanted to do. That's all my setup for what's about to come next, Inger.

Mm-hmm. Um, listeners you will find in our show notes a bunch [00:54:00] of links to some documents. Uh, the first one I'm gonna go through is I'm gonna flash through the slides that I used at the conference, and I'm gonna kind of walk you through the overall argument that I've made, and then I've got three documents in there.

That's where I created these frameworks. So one is called the AI as a Thinking Partner framework, one is the AI as a Verifier framework, and one is the AI as an Information Processor framework. So you can download these documents. Um, they're just ways of thinking about how you might use AI in your work. Three different ways that you can use it in your work.

And then the last one is a document called How I Use Claude as a Ready-to-Hand Technology to Help Me Build a Structured Dialogical Thinking Bot. So the idea here is that I took the whole conversation that I had with Claude and revealed all of the thinking that goes along in the backend of [00:55:00] Claude. So when I would say to Claude, I want you to do this thing, Claude then goes away and thinks about that, and it talks to itself, right?

You've seen this, Inger, where it says: oh yeah, Jason's asking me to do blah, blah, blah. I need to go away and then do blah, blah, blah. And you can reveal all of that. Mm-hmm. So in that document, what I've done is I've revealed all of that. So you get to see what I said, what it thought, and then its responses to that, and then my response to its response.

Do you know what I mean? This conversation? Yeah. So looking at that document, you're not talking about just what it came back to you with, but what it was thinking behind that. Well, yeah. So, right. What's going on in the machine? Yes. Um, how it's thinking is revealed.

Yeah. Because they're fun to watch, those things, actually. Yeah. I mean, we've been doing it on the small language models. I think I told you when it was doing a rubric marking exercise. Yeah. Yeah. It was like, the user has asked me to mark this [00:56:00] thing out of a hundred, and so it gives itself 105 out of a hundred.

Yeah. And then it said, wait, hang on, I think it's gotta be under a hundred. And it goes, oh, right, okay, 97. Yeah. So when people say, you know, you can just use AI for marking: that's got to be bullshit. Yeah. This has got to be bullshit. Yeah. Yeah. Anyway. Yeah. Okay. So that's interesting.
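[Production note for the show notes: since a few people have asked how you actually see that working-out, here's a minimal sketch using the Anthropic Python SDK with its extended-thinking option turned on. The model name is a placeholder (check the current docs), and the prompt just echoes the rubric-marking example above, it isn't the exact one from the episode.]

```python
# Minimal sketch: surfacing Claude's "thinking" blocks alongside its answer.
# Assumes the `anthropic` Python SDK with extended thinking enabled; the
# model name below is a placeholder -- check Anthropic's docs for current names.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=16000,
    # Ask the API to return the model's working-out as "thinking" blocks.
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{
        "role": "user",
        "content": "Mark this essay against the attached rubric, out of 100: ...",
    }],
)

# The reply interleaves "thinking" blocks (the working-out described above)
# with ordinary "text" blocks (the answer you'd normally see).
for block in response.content:
    if block.type == "thinking":
        print("[THINKING]", block.thinking)
    elif block.type == "text":
        print("[ANSWER]", block.text)
```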

So that's the documents. I'll put them in the main body of the show notes. So if you look down at your phone, you should be able to see them; I'll put them separately to the things-we-talked-about list, so it's a bit easier to see them. Yeah. So just, um, those links, by way of kind of managing the next little bit while I talk about this.

I'm gonna start with the slides and kind of work through them pretty quickly. Um, I'll let you know when I'm gonna move to another document. Yep. I'm only gonna pull up one of those frameworks. I'm not gonna go through all three of them. Like, once you've seen one, the other two are the same, but, you know, just a different, uh, context.

Mm-hmm. So, um, [00:57:00] and then only at the end I'm gonna pull up the big document. It's 28 pages long, that conversation I had with Claude, by the time you Yep. Um, by the time you expand all the stuff in it. Uh, and I'll just pull a few conversations outta that, just by way of explanation about what actually happened, and what I was thinking the whole time I was doing this thing.

So this whole thing was: I'm using AI to build something. And how am I using it? Because I'm using it in this way, and I'm also trying to figure out how I'm using it, so that I can produce this framework so that other people can think about how they use it. And then, how have I done that?

Like, it was this kind of circular, very meta experience. Very meta. Yeah. Yeah. Very, very meta experience. Yeah. So hopefully that's what we get to at the end of this. I will try and get through this reasonably quickly. So we'll start with the slide deck. Um, yep. On the very first slide, I talk about using AI as [00:58:00] augmentative technology, not assistive technology.

Um, so really grounding this in the space that it helps and amplifies your own human ability. Rather than, yeah. Assistive technologies are often designed to support people with, say, maybe physical disabilities. Like, blind people, um, use assistive technologies like Braille readers so that they can read, and those sorts of things.

Right. That's... it's a different argument. So I'm really talking about augmentation here. Mm-hmm. Um, in the next slide, I start to set up the argument about whether or not AI is a thing that's here to stay. So on that next slide, there's a description of a study that was done a few years ago now, that talks about: when you give AI to people, what is the impact in terms of their ability to do work?

Upshot is, they can do it faster, they can do more of it, and the outcomes are better. So that's the slide with those two peaks on it: did not use AI [00:59:00] and used AI. Yeah. It's getting harder and harder to tell, isn't it? 'Cause people don't fess up to what they actually do use it for. Yeah. Correct. And, um, for all sorts of reasons. Yeah. Yeah. So that study there was done by Ethan Mollick and his crew.

Oh, was it? Okay. Yeah. So, uh, an actual academic study, a well designed sort of study. But, yeah: better, faster, more. Yeah, yeah, exactly. Uh, the next couple of slides, uh, are from a McKinsey report. They released a global AI report last year, and then they've re-released an updated version this year.

So I've just pulled a few slides from that report, really. The next slide is just really showing... they're nice graphs. Yeah, they're pretty. McKinsey do really nice graphs. Yeah. Uh, that the revenues are increasing across the various areas of these [01:00:00] businesses through the use of AI. So what we're seeing is that there's a revenue bump for organizations that are implementing AI, and that that revenue bump is distributed across various different functions inside organizations.

The bottom line story here is that these graphs are saying that if you implement AI, you're gonna improve your revenue. The next one, uh, was around which functions companies are putting AI into in their business, and at kind of what sort of scale. And it won't surprise you that across the sectors of technology, professional services, advanced industries, media and telecom, consumer goods and retail, financial services, healthcare, pharma and medical products, and energy and materials, the space where AI is being deployed the most is [01:01:00] in marketing and sales.

Yeah. 'Cause it's great at that, right? Yeah. Right. Um, brilliant. And yeah. Yeah. And so that next graph there really just shows, across those various different sectors, the areas within which AI is being deployed, across those various functions as well. So in media and telecom, for example, the highest deployment of AI is in the marketing and sales space, but the next highest one is in service operations.

Whereas in some of the others, uh, if you look at healthcare, pharma and medical products, the highest one there is actually in IT, then followed by marketing. Mm-hmm. Um, and then followed by knowledge management. So this graph, why I put it up there, was to show that AI is being deployed across multiple sectors and across multiple functions at the same time.

The next graph there just shows the organizations that are [01:02:00] implementing AI in multiple functions inside their business. So it's not just a case of putting it in one spot in their business; they're actually deploying it across their businesses. The point that I am, um, trying to make by bringing all these graphs together is that AI is now going to become a tool that we are just gonna live with, right?

Like, it makes more money. Um, it works in lots of different areas. And so you are not going to actually be able to live a modern, contemporary life without interacting with AI, I think, um, over the next little while. There's a lot of people who think that you can, Jason. There's a lot of people who think that you can.

Yeah, I'm just saying, I mean, I'm not taking their point of view. I just don't know how you can hold onto that point of view and actually look at any of this and be able to say that. Like, yeah, sure, you can refuse it, but you're just gonna end up living in the, you know, bush cabin.

When you [01:03:00] look at the graphs, right: companies are putting it into multiple parts of their business at increasing rates, like, super increasing rates. It's across all functions. It helps you make money.

I mean, you put those three things together and you're gonna see more of it. I think these graphs are actually hiding a bigger reality than you see here. I think it's actually bigger than this, because these graphs are capturing management-eye views.

You know, the person who's filling in the survey, it gets passed down the chain to the manager of wherever, and they're like, oh yeah, we've got this project and we're doing this project. And not at the individual worker level, where people are using it all the time for all sorts of things.

And that's often called shadow AI use. You know, that innovation at the local worker level isn't captured in any of these graphs. So I actually think that the change is actually bigger than you're seeing. Yeah. Yeah. So [01:04:00] the last graph that I've got there is, um, one that we've got from Business New South Wales, their business conditions report. They went out and they asked a whole bunch of their small businesses, are they using AI, and where are they using it. Um, and I just wanted to highlight in that particular graph the various areas where AI is actually being deployed in small business in New South Wales.

Um, and the interesting thing about that graph is the way in which they've color coded the data. So red is: no, we're not interested. Um, and if we look across the top, there are 723 respondents on whether or not they're using AI for information search.

17%: no, we are not. 2%: no, we tried but stopped, and I'll come back to that one. Mm. Um, the grey is 31%: no, but we are interested in doing that. So, [01:05:00] you know, I read that as, we haven't quite got there yet. Mm. And then the green in that one was 50%: yes, and cutting staff. Right. So if you look at that, yeah, it's the yes and the cutting staff together.

It's, you know, this one stops people in their tracks in the classroom, right? Like, yes and cutting staff. Yes. So if you look at the 31% in that top line there, the no-but-we're-interested bit, some of them are gonna try it and it's not gonna work for them, because they don't know how to use it yet.

Right. And so they might move to, um, no, tried but stopped. But I think what will actually happen is the no-tried-but-stopped group will actually get smaller over time as they figure out how to use AI. Yeah. So, you know, it's just becoming easier to use now. Yeah. Like, as you were saying before, [01:06:00] you know, we don't have to do so much priming and prompting anymore.

We don't have to approach an AI in a particular way to get the best out of it. We can just talk to it now, and it gives you pretty good results. So all of this together, um, I think what you end up seeing is that this is pretty much a trend that is not going to go away. Um, and so that got me thinking: well, if AI's gonna be everywhere, humans will need to figure out how to work with it.

Right. Like, you're gonna have to have some sort of rules of thumb when it comes to working with AI. And it can't be down at the specific, in-the-weeds detail of, if you use these specific words, you'll get this specific answer. They need to be at the kind of, mm-hmm, principle level, if you know what I mean.

Yeah. Like, these are the principles that we use for it. So, um, the three I came up with, and this is the bit where it all starts to collapse in on itself, Inception style, mm-hmm, are: AI as a thinking [01:07:00] partner, AI as a verifier, um, and then AI as an information processor. So what I'd done at this point is I'd gone to Claude and I'm saying: how do we effectively use AI in businesses?

And I need, like, a three-box framework for this, really. Mm-hmm. Um, so parallel to this, I'm doing this work, right? And so that got me thinking: if these tools are gonna be everywhere and you can't avoid them, what have people said about this sort of thing in the past? And so the next slide is where I start to introduce this idea of Heidegger, and an argument for an AI-augmented, embodied working context.

So this is, you know, you recall you used the embodied research context in the episode. Um, yeah, 78. Yeah. So, like, I'm calling back to that episode there. Yeah. When I, [01:08:00] as I'm thinking about this, it's like, yeah, it's not just research, it's also in these other areas as well. So Heidegger, Martin Heidegger, German philosopher, dead now, he would've described AI as being what he called ready to hand.

And so in the next slide there, for the people who have forgotten their Heidegger, I've got a bit of a basic recap. Ready to hand: the basic idea here is that when something is ready to hand, it's not experienced as an object, but as part of our engaged activity in the world. So the tool itself that you are using kind of withdraws from your conscious attention, and it becomes transparent.

The way in which I think about this: you are in the moment doing a thing with the tool, but you're not thinking about the tool at that point. Like, it's just... you and the tool are the reality, right? Um, I often use driving a car to explain that. 'Cause, yeah, you know, when you're starting to learn to drive a car, you really feel the car as a machine.

But [01:09:00] when you are fluent with driving a car, it's an extension of your body, and you're a car-human entity, right? Navigating a world. Yeah. And you're sort of running all the controls of it and having a feel of it, and the feedback that you get from it, uh, they're just part of your conscious experience of driving.

You don't sort of think... but when you first get in the car, it's like, where is the, you know, gear stick? Where is the, you know? Yeah. And then it becomes ready to hand. It sort of falls away. Yeah. Yeah. That's a really interesting idea actually, because, um, my friend, who's very upset about ChatGPT's personality changing, to the point where he had to ring me up and rant to me about it.

Um, he's a big user of ChatGPT's, um, voice input. So he talks to it. Yes. He doesn't type. And I think he experiences this ready to hand. I definitely experience AI this way now, 'cause I use it so much, but I think he does even more, um, because he's not even really, like, forming words into a [01:10:00] keyboard.

So, yeah, it really interrupted his flow state, I think, when it changed. Um, yeah. Because it's so much a part of him now. Yeah. Yeah. The example that I use on this slide is typing. You know, when you're typing an email, you're not thinking about the keyboard; you're thinking about the message that you're sending.

Um, yeah. And so the keyboard actually disappears from your consciousness, but it's still part of what you're doing. So in the same way that you're talking about a car, the keyboard becomes ready to hand, right? Like, yeah. So the thing that kind of made me stop and think a bit about this was this idea of the subject-object relationship, right?

Between humans and tools. And so, you know, it becomes a little bit blurred there when you go down this ready-to-hand path. The other thing, um, that Heidegger talks about is that these ready-to-hand [01:11:00] tools or items exist in what he calls an equipment totality. Equipment totality. So the example here is like: a hammer refers to nails, which refers to wood, which refers to building a house, which refers to a dwelling.

So the hammer gets its meaning from the web of relationships and purposes, not just from the physical properties of a wooden handle and a metal head. Do you know what I mean? It's in the use of the hammer that the hammer gets its meaning. And if you take your hammer and you use it in different places, you might get a slightly different meaning, right?

But it's a different equipment totality that you're dealing with. Yeah, if you think about that, you can sort of see the difference: you think about a hammer quite differently when you think about building a house than when you think about a murder with a hammer.

Yeah. Or cooking with a hammer. Like, if you're gonna crack an [01:12:00] egg, right? You're very much thinking... you know, your tool is not ready to hand then, I must admit. But, yeah. Yeah. Exactly. Exactly. Yeah. The context really, really does matter. Um, so here, at this point, I'm making the argument that AI has entered into our equipment context.

Um, you know, leveraging off those graphs that we saw earlier. Um, and so I think that we're soon gonna get to a point, if we're not already there (and I think, Inger, you and me are already there), where it's already part of our equipment totality. So the challenge I think then becomes:

Um, how do we move towards a future where AI tools are ready to hand, like, they're everywhere, um, and we are using them, and the more we use them, the less obvious they become to us in their use, but we still remain critically aware of our equipment context and our equipment totality, right? So we [01:13:00] still need to be thinking about these things as they fade from our consciousness.

It's a tricky situation. Mm. So, at the moment, um, on this slide here, we're still framing AI, I think, in terms of a subject-object relationship. Like, if you remember, it's only roughly three years since ChatGPT launched. And I kind of rely a little bit here on the workshops that we give, that people pay us for: people often still think of AI as a thing, you know, as a thing separate to the human.

And, you know, how do I do it, and how do I prime it, and how do I prompt it? And which one of these things over there is better than the other thing? Like, we get those kinds of questions a lot, and that still reminds me of people thinking about this as a separate tool that they haven't [01:14:00] quite integrated into their thinking approach yet.

And so very much they're just picking up the hammer for the first time, sort of thing. Or they're refusing to pick up the hammer at all. Yeah. I mean, no, this is a serious issue. 'Cause I was talking at PhD induction, just to digress for a tiny bit, yeah, about, you know, um, about the divide. Which, Jonathan... I don't know. My latest blog post, that I published finally, for the first time in four months, I published a blog post.

I'm pleased to report. Yeah. I wrote the whole thing myself with no AI help whatsoever. It was good, like playing the piano. Yeah. Um, and, uh, you know, Jonathan and I had lunch, a lovely lunch, and he said, Inger, it's a bit like the Catholics and the Protestants. It's become an article of faith, yeah, and people have big feelings about it.

Yeah. And some people have told me that they won't examine a thesis where a student has said that they'll use AI. Yeah. And I'm less worried about that than the supervisor who doesn't take a conscientious stand and just judges the person negatively. Yeah. Because they [01:15:00] themselves don't use AI. They don't believe in it as a technology.

Um, and increasingly, every thesis will have some element, even just for editing. Yeah. And then, if they wanna keep examining, they'll have to examine it, and then they'll be judging people. Yeah. And then the ones who are like, I wanna do my PhD completely free-range, artisanal, handcrafted.

Um, so they may do that. Mm-hmm. They're gonna be, I mean, a less augmented version of themselves, right? Mm-hmm. So it's gonna be different, maybe worse than it would've been, you know. But there's maybe examiners out there who just won't even look at it.

Like, people have said this to me: I will not look at it. Yeah. And so, saying that to new PhD students, they're all freaking the fuck out. Yeah. Of course. Because they're like, I'm three years out. Yeah. How do I know? And I'm like, well, I should tell you this now, because this might be, you know, something you're gonna have to think about.

And for me as a supervisor, [01:16:00] when I go to get an examiner for a PhD now, which I'm about to do for one of my students next year, yeah. It'll be the first conversation I have with that person. Yeah. And I'll be crossing them off. Yeah. You know, which is ridiculous. Yeah. Actually. Yeah. Yeah. Like ridiculous. Yeah.

I'm like, I agree. Like, people are being ridiculously worked up about it. Not that they're wrong that there's problems, but to take it to that extent... Yeah. Like, I'd rather you tell me. Yeah. You know, I'd rather you be conscientious. I can respect that as a position, 'cause at least you're being clear.

Yeah. Yeah. At the, I guess the point I'm, and I'll hold, I'll hold that point that you're making there. I'll just hold that in space for the moment because I think the, as we go through just the rest of this, we'll come back to it again. Right. But the idea of [01:17:00] being clear about how you're using AI is really gonna be, is really going to be important.

Yeah. So that you can articulate where it has augmented you as a human, and not be ashamed of that. Because, yeah, what you've done is you've consciously thought about that augmentation, and so the tool becomes ready to hand, but at the same time you have to be removed enough from it so that you are constantly critiquing what's going on in that process.

Which is a reflexive process that most researchers have to do anyway, right? Yeah. Like, you know, in social science or in science, you've gotta think about how you're thinking, think about how you're doing a thing, think about yourself: what am I in this? So it is something we're used to doing already, actually.

Yeah. We are. Yes. Exactly right, because we got trained in doing that. Exactly right. So the point [01:18:00] I think I'm trying to make here is: for people who may not have that level of training around reflexivity, mm-hmm, um, how do you go about deploying AI in a meaningful way across your organization, and still do it in such a way that you can feel pretty good about it, you know, about the way in which it's being used?

Yeah. And then you're gonna have to train your people, right? You can't just deploy this stuff and say, have at it. You're gonna have to train them in thinking this stuff through. Mm-hmm. Anyway, next slide. I make an argument that you should treat AI as a consultant, not as a collaborator. And the main point in all of those words is that, um, with a consultant relationship, what you do is you ask a consultant to do a thing, and the consultant goes away and does the thing.

Then you come back and you assess what the consultant did, and then you pay the consultant, um, for that thing that they did. And then you decide what you're gonna do with the thing that the consultant gave you, right? Like, so you're in [01:19:00] charge at the start of the process. You let them go away and do their thing.

But when it comes back to you, you look at it very, very carefully, and then you make your assessments about the quality of the work of the consultant. Um, rather than this idea of AI as collaborator, um, where, partly, if you start thinking about AI as collaborator, you lose a little bit of control of the conversation, I think.

Um, and it becomes harder for you to determine where you as a human are in charge, or where the machine is in charge, where you've kind of handed over responsibility to the algorithm. So I still like thinking about this as AI as consultant, where I get to review. So, this thinking technique... um, and this work builds on, you know, how we came up with our framework, structured dialogical [01:20:00] inquiry, for using AI in research, for research frameworks and that sort of stuff, which is the favorite bot deployed in the classroom.

By the way, people love that bot. Do they? That's great. We made a GPT of that. Yeah. And, um, I use it all the time in the classroom. I'll give people the link, and then they go, and it prompts them through your structured way of thinking about an idea. Yeah. Yeah. People just love it.

Yeah, well, we'll give you the link to the structured dialogical inquiry bot, so you can play with it. It's quite fun. I took that idea and essentially asked Claude: here's the idea of structured, um, dialogical inquiry for researchers. It strikes me that this approach could be useful in the business context, but it's not gonna be the same.

Mm. So we need to come up with something similar. Mm. So the aim here, really, with this technique, is to expand your thinking while maintaining ownership. Uh, get the [01:21:00] AI to challenge your thinking, um, to refine your thinking, and to build you into a conversation, a discussion, with that approach.

And then you come back, and you make some assessment at the other end. So the way in which I frame this is in three different patterns. So the first one is AI as thinking partner. So where you lead: you have an idea or a thought, um, you state your current approach or strategy, and then you ask the AI to produce counter-arguments.

Um, or alternative perspectives. So you start with something, uh, you've asked the AI to come back with something else, um, and then you critically evaluate these, and you strengthen your original position by doing that. So: human, AI, human. Mm. The second one was AI as verifier. [01:22:00] So you submit a plan or a policy or something that you're developing, and then you ask the AI to identify potential weaknesses or compliance issues, and then you use the feedback to strengthen your approach.

So, think business processes; this is a really good one for business processes. If you build a policy, then you run it through the AI and say: does this policy meet the legal requirements of, like, various legislation, or something like that? So the AI is verifying your work. Um, and then the last one is, as, um... I can't remember what I called it now.

Information processor. Was it information processor? Correct. So you provide the raw material. So this is like a data dump of some kind or another. And the AI then structures that data, puts it into categories, analyzes the content, that sort of stuff. And then you review and [01:23:00] verify the processed information.

So what actually happens is that when you are working with AI, often you're using these three things interchangeably in the same chat, right? So you will be using one of these kinds of thinking patterns at any particular time, and then you might jump between all of them in one conversation. Um, and so what I've done there is I've linked out to those three frameworks, and I got Claude to write these frameworks up.
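[Show notes aside: to make the three patterns concrete, here's a minimal sketch of each one as a prompt builder in Python. The `ask_ai` helper is hypothetical, a stand-in for whatever chat tool or API you use, and the prompt wording is illustrative rather than lifted from the framework documents.]

```python
# Minimal sketch of the three human-AI-human patterns as prompt builders.
# `ask_ai` is a hypothetical stand-in for whatever chat tool or API you use.

def ask_ai(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to your AI tool, return its reply."""
    raise NotImplementedError("wire this up to your own AI tool")

def thinking_partner(my_position: str) -> str:
    # You lead with a position; the AI produces counter-arguments for YOU to weigh.
    return ask_ai(
        f"My current approach is: {my_position}\n"
        "Give me the strongest counter-arguments and alternative perspectives. "
        "Don't rewrite my approach; challenge it."
    )

def verifier(draft: str, requirements: str) -> str:
    # You submit near-finished work; the AI hunts for weaknesses and gaps.
    return ask_ai(
        f"Here is a draft policy or plan:\n{draft}\n"
        f"Check it against these requirements: {requirements}\n"
        "List potential weaknesses or compliance issues only."
    )

def information_processor(raw_material: str, instructions: str) -> str:
    # You supply raw (non-confidential!) data; the AI structures it; you review.
    return ask_ai(
        f"I have this raw material:\n{raw_material}\n"
        f"Process it as follows: {instructions}"
    )
```

[And, as above, in a real chat you'd bounce between all three patterns inside one conversation; the functions just make the human-AI-human hand-off points easy to see.]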

I might just look at the Thinking Partner framework document. So, listeners, you might wanna go to that... Can we do the information processor? Just because I've done thinking partner so much, and I'm keen to learn about information processor. Oh, okay. Yep, sure. Yeah, yeah. Information processor framework.

Just 'cause you developed the structured dialogical inquiry, like, a couple of months ago now, and I've been using it a lot, and I think it's really handy, [01:24:00] and I'm really interested in throwing my eye over this information processor framework. Oh, we're just doing a team meeting here, folks; talk amongst yourselves.

We've both been sort of branching out on parallel tracks in some ways, and there's a lot of crossover, but you and I just approach things slightly differently. You're much more systematic than me. Okay. And so, yeah, I think this information processing one...

Listen up, people who are doing any sort of spreadsheet crunching, um, text coding or anything, 'cause I think this is the shit. Yeah. So the basic principle here is, you know, human, AI, human. So you supply the data, the AI then organizes that data into categories of some kind or another, and then you review.

Mm-hmm. Mm-hmm. Um, and so the idea here, really, is to take messy stuff and turn it into ordered stuff of some way, shape, or form. And [01:25:00] we do that a lot, right? Like, just as humans walking around in the world, we get lots of inputs, we get lots of data that we then have to, in our heads, categorize, label, sift, sort, so that we can then make decisions about stuff.

I 100% agree with this, and the more that I work with AI... it's interesting how we work on parallel tracks but we're often thinking the same thing. Yeah. Which is, like, I think what I'm really teaching people is the importance of structuring your data, right? Yeah.

Whatever it is. Yeah. And what you've got here is... I realize how much AI just lifts the burden in creating that structure, so that something can make more sense or be used for something else. Right. That's what I use it for, like, a shit ton. Yeah. Just organizing it in such a way that it can do something different with it, or something interesting with it.

So, yeah. Yeah. And so, like, I think right now is a really critical time to stop and pause for a second and say: you're not handing over control here. Like, you are saying, [01:26:00] hey AI, organize this shit for me so that I can see something. You know, and you talk about the golden retriever, um, approach, right?

With data. Like, you throw the ball, the golden retriever goes and gets it and brings it back. And you do that a few times, and you eventually get to somewhere useful and interesting. Right. Um, importantly though, you wouldn't let the golden retriever write your research report for you, correct?

That's my point. Correct. Right. Yeah. I mean, it's lovely, it's very delightful, but no. So you're still in control. Yeah. And that's the important thing here, right? So, um, you might think about it like this. Step one would be: you prepare your raw materials, whatever they are.

So they might be survey responses, or feedback data, or maybe policy documents, or procedures, or public domain content, whatever requires analysis. You wanna check that there's nothing confidential in there, right? Personal, sensitive, remember, third-party Yep. [01:27:00] stuff. We still have privacy laws and those sorts of things.

Um, yeah. And this is stuff you can work out with your ethics approval, which is what I've done: sort of agreed to redacting certain things before I put it in there. Yep. Yep, yep, yep. Your next step is to define your processing requirements. So this is where you might write a paragraph that really describes what you have and exactly how you want it processed.

So you're giving the AI instructions. And so this is a little bit of the priming work that we talk about. So be specific about the output format, level of detail, any particular focus areas, that sort of stuff. Hmm. Um, the next step is to engage. Do you wanna read out the example there?

'Cause I think people who are listening in the car might wanna hear how that might be phrased. Yeah. Okay. Yeah, so the example that I've got here is: I have 200 responses from a public consultation about community facility preferences. The responses are currently in free text [01:28:00] format, and I need them categorized by facility type, priority level, and geographic area.

I want a summary that shows the most requested facilities and identifies any patterns in preferences by location. The output should be in a format suitable for presenting to the planning committee. Hmm. So, um, you know, being pretty specific about what you want the machine to do, and the format you want it to come out in.

Mm-hmm. Um, so step three is then you kind of engage in this process of systematic processing. So there's like a three-step prompting structure here. Uh, step one is: present your material and processing requirements. So the prompt might go something like, "I have...", and then you describe your material; "I need...", then the specific processing task; "please...", then detailed instructions for how you want it organized or analyzed; and "structure the output as...", then you specify the format. Mm-hmm.

Please detailed instructions for how you want to organize, analyze, structure, the output as specify format. Mm-hmm. Um, the next step is request clarification on the methodology. So can you [01:29:00] explain, so this is when it gives, when it gives you something back, you request it to explain itself. Yes. Right? Yes.

Yeah. Can you explain how you've categorized or analyzed this information? Because you have to check, as the human, right? Yeah. Yeah. Like, you are still in charge of this. And so: how did you do that? What criteria did you use to make these distinctions? So you're not just trusting what the AI is doing.
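[Show notes aside: the prompting structure reads naturally as turns in one running chat, so here's a minimal sketch under that assumption, including the completeness check that comes up a little later in the conversation. The `chat` helper is hypothetical, like the `ask_ai` stub in the earlier sketch, and the sample data is made up to echo the consultation example read out above.]

```python
# Minimal sketch of the three-step prompting structure as three turns in one
# running conversation. `chat` is a hypothetical wrapper around your AI tool.

history: list[dict] = []

def chat(user_text: str) -> str:
    """Hypothetical helper: append a user turn, get (and record) the reply."""
    history.append({"role": "user", "content": user_text})
    reply = "..."  # call your AI tool with the full `history` here
    history.append({"role": "assistant", "content": reply})
    return reply

# Made-up placeholder data standing in for your checked, de-identified material.
responses = ["More netball courts in the north...", "A library, high priority..."]

# Step 1: present your material and processing requirements.
first_pass = chat(
    "I have 200 free-text responses from a public consultation about community "
    "facility preferences. Categorise them by facility type, priority level and "
    "geographic area, summarise the most requested facilities, identify any "
    "patterns by location, and format the output for a planning committee.\n\n"
    + "\n".join(responses)
)

# Step 2: request clarification on the methodology -- make it explain itself.
methodology = chat(
    "Can you explain how you categorised or analysed this information? "
    "What criteria did you use to make these distinctions?"
)

# Step 3: verify completeness and accuracy before you rely on anything.
audit = chat(
    "Have you captured all the key themes and data points from my original "
    "material? Are there any important elements that might have been missed "
    "or misrepresented?"
)

# The human step: spot-check `first_pass` against the raw responses yourself.
```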

It comes back and it says, oh, I did it like this. And then what if it lied to you about how it did it? That's very, very possible. Yeah. Uh, you would have to, you know... this is where you try not to use AI for critical things that you are not, you know, a good enough expert in to be able to identify errors when it comes back.

I think that's exactly right. Like, so you can say, how have you categorized this information? And then you can go, all right, what it's tried to do is X. Yeah. And then you go through and you try and do a bit of that yourself, and you can verify it or run your eye over it. Yeah. Um, but if you don't have the capacity... Yeah. Like, I've got an example of that.

That's risky. It [01:30:00] is risky. 'Cause we've got, like, a neurodivergent traits inventory, work traits. Yep. And, um, we've got a whole lot of data that came out of that about how people responded to questions about, say, how much they feel they belong. Right? Yep. Um, and so we've figured out that there's not really a correlation between

whether you're neurodivergent diagnosed or not diagnosed or whatever. Where there is a correlation, it seems, is between what kind of traits and what kind of responses. So people who, you know, have trouble understanding social situations, for example, often feel like they don't belong as much at work.

You know, that's a pretty obvious one, but there's some less obvious ones as well. So what we're trying to do is develop an alternative way of describing neurodivergence that isn't about, um, a diagnosis per se. It's about what kind of problems or strengths you have at work, and how they predict other things.

It [01:31:00] seems that that's got predictive force, but it requires something called structural equation modeling. I don't know about you, Jason, but that gives me the heebie-jeebies, because that's real math. Um, yeah. But this kind of thing I've been playing around with in ChatGPT, and I think ChatGPT can do it, but I can't verify it.

Yeah. Like, it says, oh yeah, I found blah, blah, blah. And I'm like, well, that's pretty cool, but I can't possibly put that in a paper. Thank you, golden retriever. Yes, pretty thing that you brought back. Is it a ball, or is it a sock that you found, or a dead bird? I can't tell what you've brought back to me.

So if it's a very complex task, you definitely need expertise. I mean, you could go ahead and publish that if you want, but you've only got one reputation to lose, right? Right. So risk equals threat times vulnerability, right? Mm-hmm. And in that particular case, the threat is that the algorithm returns something that looks okay, but you can't verify it.

And so you are highly vulnerable at that point, [01:32:00] because you can't verify it. So one of the things that you need to do is make sure that you have a way to verify that stuff. So I'll talk about quality assurance shortly. Um, but the third step there, really, after you've requested that clarification, is: verify completeness and accuracy.

And so you might ask it something along the lines of: have you captured all the key themes and data points from my original material? Are there any important elements that might have been missed or misrepresented? Now, I don't know about you, Inger, but sometimes when I push back on Claude, um, with that kind of verification process at the end, Claude comes back and goes, oh no, you're absolutely right, Jason. I've, you know, carefully reanalyzed this piece of work that I produced for you before.

And indeed I have made mistakes, right? And every time it does that, I'm like, yeah, yeah, yeah, fucking why didn't you just do it right the first time, sort of thing. But that step of verifying... yeah, I know, but, like, why do we [01:33:00] expect that?

This is what I find so amusing slash annoying: why do we expect it to be better than us? Honestly, it was trained on us. Yeah, yeah, yeah. Like, we lie. Yeah. We are inaccurate. Yeah. Yeah. We get bored. So, you know, of course it does. Yeah. But what's amazing is when it says, no, I've got it all. Yeah. And you're like, really? Can I trust that either?

Can I trust that either? Yeah, yeah, yeah. But when I was doing the book, which I mentioned before, and I was like, and I kept saying to it, have you got it all? It's like, yes, it's all correct. And I, and I still, I still send it off to the editor going, well, I guess we'll find out. And um, yes, that's it. It was actually correct.

Well, it was correct. I just never believe it. I never believe it. Trust, but verify. Yeah. I mean, it was good enough for that particular editor, right? Like, well, sure. Yeah. Yeah. Maybe they just didn't find it either. Yeah. I mean, you know. Yeah. Yeah. Um, the last step here really is around quality assurance and review.

So, um, you throw your eyes across it: does the output accurately represent your source material? The categories, themes, or analysis, are they logical and useful? Has any important information been missed or mischaracterized? That's hard to know. Um, mm-hmm. And it depends on how much time you've got. So, you know, I was doing a benchmarking exercise using agent mode for a report the other day.

Yeah. Because: what are all the other universities doing about researcher development? I said, I want you to visit all 38 universities. Here's all of their names. I want you to go find them. You'll probably find that on the graduate, um, school or equivalent page, but you might find it somewhere else. And watching it work, it would go off to the graduate school page and go, I can't find anything there. I'm going to the library. You know.

I'm going to the library. I'm not, you know, and it ticked away for about 18 minutes Yeah. And brought back this spreadsheet and I had to then click on every link on that spreadsheet to see Yeah. That it had in fact [01:35:00] found that. And it had, but yeah. You know, that's, that's I God's pretty tempted not to click on every link.

Yeah. Because after the first 10 or 15, I was like, well, am I gonna find anything wrong? Yeah. Can I be bothered? Right. Yeah. Yeah. Like, verifying completeness and accuracy is gonna be more of our future. Yeah. You know, it's not the sort of work I like to do. No, it really makes you think: it's AI doing the fun bit, and I'm doing the boring bit.
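[Show notes aside: some of that boring bit can itself be automated. Here's a minimal sketch: if the agent's spreadsheet is exported as a CSV with a "url" column (an assumption about your export), you can at least confirm every link resolves before you read the ones that matter. Uses the `requests` library.]

```python
# Minimal sketch: spot-check that every URL an agent put in its spreadsheet
# actually resolves. Assumes a CSV export with a "url" column.
import csv
import requests

with open("benchmarking.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    url = row["url"]
    try:
        # HEAD is cheap; some sites reject it, so fall back to GET.
        resp = requests.head(url, timeout=10, allow_redirects=True)
        if resp.status_code >= 400:
            resp = requests.get(url, timeout=10, allow_redirects=True)
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(status, url)

# A 200 only proves the page exists, not that it says what the agent claims:
# the reading-and-judging step stays human.
```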

Yeah. But you can make the boring bit fun by playing decent music and having coffee, right? Sure. That's true. That's true. Um, so that's really... they're the steps, right? The rest of that document is just, uh, some stuff around some guardrails that you might want to think about. Um, you know, don't put confidential, personal, sensitive information in, that sort of stuff.

Um, and, you know, some tips about how to go about doing that. It's gonna be a great exercise. I'm just gonna plop it straight in, um, to Harnessing [01:36:00] AI in Your Research Process, Jason. I'm just taking it straight in.

Good. Yeah. Yeah. Great. Yeah, so I built three of these. So this is the information processor one; the thinking partner framework does the same sort of thing, just a different topic. And then the last one is AI as verifier. So, um, listeners will be able to just click on all of those and have a look at them and take what they want.

Uh, just a production note, Inger: those links in that document are not shared to the public at the moment. They're only shared between you and me, so don't... Oh, okay. Noted. Don't just take those ones. Don't just cut and paste them. No. We'll need to change preferences there.

Alright. Adjust preferences. Yeah. Um, I'm just putting my producer's hat on and saying that was awesome, and I learned some stuff from that, Jason. But I'm also looking at the time, which is 1:45. Yes. And there can be snippety-doo-dah [01:37:00] earlier on, 'cause we kind of did wander off track here and there.

I'm sure I can find us 10 minutes, but we probably should close this out. I really wanna look at that last document, because I don't think that you can get there without that piece, which is the Claude conversation, where you've revealed what Claude is thinking.

So, um, I'm just gonna start at the start. I'll read the very first one, and then I'll jump down a little bit further. So I started with saying, and this is my actual prompt: Claude, tonight I'm designing a workshop.

Here's the description. Then I gave a description of the workshop. Um, and then I've gone on and said, I've attached a framework we use for researchers, but it strikes me that it could also be used for executives and other people who just want a way to work with AI that helps 'em maintain control over their outputs.

Can you read the document and then have a conversation with me, a real conversation where we test each other's thinking? I want to see if we can create another artifact, similar to this one, that can help [01:38:00] people think about how they use AI. So then Claude in the background has read that and has gone away and is thinking to itself, and this is in purple: the user has shared a PDF document that outlines a practice activity,

structured dialogical inquiry, designed to help researchers develop skills in using AI as a thinking partner while maintaining intellectual ownership. Claude then goes on to say things like: let me read through this document carefully and understand its structure. It then reports back on what that structure is in that document.

Then it goes on to say things like: the user wants to have a, quote, real conversation, where we, quote, test each other's thinking, to see if we can create a similar artifact for broader professional use beyond just researchers. This is an interesting challenge. The current framework is quite research-focused, but the underlying principles of maintaining intellectual ownership while leveraging AI as a thinking partner are broadly applicable.

Let me engage in the kind of structured dialogue they're describing. I should start by presenting my initial position on how this could be adapted, [01:39:00] then invite them to challenge my thinking. Right? So that's what's going on in the background. That's the thinking process of Claude. Claude comes back and basically says, yep, that's really interesting.

Um, here's something I want you to think about. Um, and gives me the sentence, it says, in my experience, many business professionals are quite happy to delegate cognitive work to ai. If it saves time and produces good results, they might not share. So researchers concerns about intellectual ownership. So what, what the machine's done here is gone back to that original document that was all framed around intellectual ownership, that sort of stuff.

Mm. And it said: what's your view on this? Do you think the desire for intellectual control is universal? Or is this framework potentially solving a problem that only exists in an academic context? An important question to ask, and one of the first questions you'd ask if you were just having a human conversation about it.

Right. And so, I mean, I can imagine you and I having a coffee, and me saying, not as elegantly as Claude: yeah, but do you really think they give a fuck? Yeah, that's what [01:40:00] I would say. But, you know, Claude's nicer. Yeah. So then I push back against Claude, right? And I'm saying: there is a very strong concern with business folk that the outcome is correct and verified.

There are many laws that could be broken if delegation to AI was left unchecked. This particular crowd all work in the TAFE sector in Victoria, so there's some overlap in the degree of intellectual rigor. So can you see what's happening here, Inger? What's happened is I've put in this thing, I've given it this document to review, and told it to come back.

Like, come at me, bro. And then I'm into a conversation with it. It's said something, and I'm like, no, no, no, I think you're wrong, because of these reasons. And at that point, Inger, and I'm two prompts in at this point, it has entered my, uh, equipment totality. Yeah. Like, I'm there. Yeah. It has, yeah.

I'm in that conversation and I am [01:41:00] not thinking about the AI. I'm only thinking about the point that I'm trying to make. Um, there are 28 pages of this conversation, where I'm pushing back, it's pushing back, I'm pushing back, it's pushing back. And not once, even though I was watching the thinking going on, not once was I thinking about the machine's thinking. I was totally focused on the problem of what I wanted to get out at the other end of this thing.
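[Show notes aside: mechanically, that 28-page push-and-pull is just a message list that keeps growing, with each reply and each challenge appended in turn. Here's a minimal sketch with the Anthropic Python SDK; the model name is a placeholder and the prompts are paraphrased from the episode, not the actual document.]

```python
# Minimal sketch of the push-back loop: one growing message list, so the
# model keeps the whole conversation in view. Model name is a placeholder.
import anthropic

client = anthropic.Anthropic()
messages = [{
    "role": "user",
    "content": "I've attached a framework we use for researchers. Have a "
               "conversation with me, a real conversation where we test "
               "each other's thinking.",  # paraphrased opening prompt
}]

def turn(user_text: str | None = None) -> str:
    """Optionally add a user push-back, then get and record Claude's reply."""
    if user_text is not None:
        messages.append({"role": "user", "content": user_text})
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=4000,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": reply.content})
    return reply.content[0].text

print(turn())  # Claude's opening challenge
print(turn("There is a very strong concern with business folk that the "
           "outcome is correct and verified..."))  # your push-back
# ...and so on, for as many rounds as the thinking needs.
```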

And like, I, you know, I'm here to say Claude pushed me to think in new ways and to really test my thinking around this sort of stuff before I got the answer, um, at the end. So, um, I've left that document there for people to have a look at so they can actually see the conversation that I've had. And you can actually see that what I've done in that document is that I've used the AI as a thinking partner.

As part of that process, you [01:42:00] can actually see how I've gone about doing that sort of thing. All of that took about 30 minutes, I suppose, all 28 pages of documentation and stuff, including the production of the documents at the other end. My point is that, um, we have been using computers now for a long time, right?

Mm-hmm. We're used to working with computers. Mm-hmm. Um, this is an extension of that, right? The AI is an extension of that kind of thing now. And now we need to learn how to work with the AI, being aware of the fact that we're going to shift from this position where most people, I think, still are at, which is subject-object, like, the split there. For some of us, and in some use cases, we're gonna get to the point where AI is just gonna be ready to hand and part of our equipment totality, and we need to think carefully about that, and still have frameworks with

which we can reflexively [01:43:00] step back out, check, and then jump back in again. Yeah. Can I... I read this and I think of this conversation. This is, like, freakishly awesome, what you've done here, by the way. Okay. No doubt. And I read this and I think about... and I'm much more online than you.

Okay. Yeah. And I also hang out with academics much more on the reg than you do. Yeah, yeah, yeah. And so I'm much more into this Catholic/Protestant divide, articles of faith, um, strong, strong feelings. Big feelings about it. Yeah. But of course, when academics have feelings, they intellectualize them, which I love them for, and I do it myself.

And I would say, you know, there are people, many, many people online, um, people who I deeply respect in lots of ways, who would say: Jason is fully living a delusion. Look at Jason here, talking to something that he thinks is useful. No, they'd say: this person is demented, this person is [01:44:00] gone. This person needs rescuing.

This is the level that people come at these kinds of discussions about AI. It's like we live in a different reality. Because I look at this and I can see you working, and I'm like, yep, that's how I work. Yep. That's the push and pull that I go through, and that's what, you know, I think elevates my work and challenges my thinking and stops me from making mistakes and just makes it more fun.

Like, it's more push and pull, and it's more playful, right? Yeah. But honestly, they would think you were delusional, and they'd feel sorry for you. I'm just putting that out there, because I know you don't really engage in these, thank goodness for you and your mental health. Because, yeah, I sometimes feel like I sit in this split-screen reality, and I sit on one side of it where I go: but I do this all the time.

And it's not like these people are describing. Yeah, right. Again, I come back to the Republican party thing, and I'm not gonna [01:45:00] mention that fucking guy's name on our pod, 'cause he doesn't deserve my attention. 'Cause as Taylor Swift points out, your attention is a luxury item and not everyone can afford it.

And that fucking guy cannot afford my attention. But I think about the kind of partisanship and the negative polarization that goes on there, and that people literally interpret other people's behavior as completely different from the way that those people experience their own behavior.

And then they try and tell that person about their own reality. The, like, reality tunnels that people occupy. But in this case of the AI/not-AI divide, I'm like: yeah, but have you tried it though? Yeah. Like, before you come and tell me that it's complete crap and can't do anything, and it's a simulation of thinking, and it's all snake oil...

Like, I'm not saying that it doesn't have a lot of marketing hype and ridiculousness around it. But in the hands of a skilled operator like yourself, and going back to the pilot analogy, you know, autopilot in the hands of someone who's a skilled pilot is a fucking awesome thing. [01:46:00] Autopilot in the hands of a person who can't fly a plane is very dangerous.

Yeah. So they've got a point in that. Yeah. But this is what you are trying to do here: carefully construct a structure which can be taught and replicated. Yes. Um, and I think, you know, that equipment totality is a very, very important aspect to talk about, because as soon as you enter that, maintaining that critical reflexive awareness becomes more and more difficult.

'Cause you just wanna bang the nail with the hammer. Yeah. Right? Yeah. And the more skilled you are, and the more you know about the field that you are working in, the better you can bang the nail down. But there's some people just pointing at it, going: you're imagining the hammer. You're imagining the hammer.

Yeah. People who, again, I emphasize, I like and respect, mm, have enormous affection for. Some of these people are actually some of the best thinkers I know. And on this I just [01:47:00] can't... like, there's just a disconnect. Yeah. And it's all big feelings. Yeah. Because I've got big feelings too.

I've got big feelings of: this is actually fucking awesome. You know, like, that's my big feeling. Yeah. I dunno, it was an interesting challenge, right? Like, putting this session together. Um, because really what I ended up doing was showing my working. Do you know what I mean?

Yeah. And I don't think that's what we do a lot. That's our job though, a lot, you know, is to show people. Yeah. But I don't think that we see this a lot. Like, I'm not seeing a lot of this in the conversations that I'm in. Oh, you barely see it ever, Jason. I get told this all the time.

People say to me all the time, every session I run, uh, I get massive compliments, which is lovely. And people just saying: thank you for being honest. Thank you for actually showing me. Thank you for really profoundly taking it on and seriously giving it a [01:48:00] crack. Yeah. Like, all I've been hearing is: don't use it at all.

It's terrible, it's dangerous. Or waving your hand at it: go use it, maybe it'll work for that, but not showing me how. Yeah. And when I was at Newcastle, it was quite interesting to sit in a room full of really smart people. You know, like, that's the great thing about our job, right? Like, yeah, it's just rooms of smart people. It's delightful.

It's delightful. Who, so a lot of whom were feeling very nervous and not able to do this. And that surprised me a bit. Considering how capable they were as people. Right? Yeah. It's like, and it's the first time I'd sort of, for sat in a room for a long time, for a like hours with people. I'm trying to struggle to use the computer program.

It really took me back to teaching AutoCAD in 1999, right? Yeah. Um, where you'd have to go: click this button. No, back, the other button. No, not that button. Yeah. And I don't think it's a coincidence that a lot of these people were women, who have often been raised to think that they're technologically incompetent. Like, you know, this is a thing.

Mm-hmm. Um, [01:49:00] it's a bit of a cultural thing. And what was delightful about that whole day was this slow realization amongst these people that the skill they could bring to AI was their deep, deep, great conversational skills. Yeah. And one of them turned to me and said: oh, you mean you can just talk to it?

I'm like, yeah. I've been having a conversation with you for 20 minutes. You're a great conversationalist. That's a transferable skill. Like, just talk to it. Yeah. And so, you know... I would've thought by now, Jason, that our business would've stopped teaching all this AI stuff.

I really thought, ah, there's three months in this. Yeah. You know, we'll make some money, we'll get the business bootstrapped, and then we can go on to teaching the other things. Because there are a lot of other things we can teach, right? So I'm not worried about us long term, when we eventually lose this thread of our business.

Yeah. I'm just surprised how long it's taking. Yeah. Um, like, a really long time. [01:50:00] And I've been through a lot of technological changes in my life, and I've taught technology since the nineties, basically. Yeah. Um, and what is it about this technology that's so different? And I think you've really nailed it here with this idea of ready to hand.

It is like driving a car. Yeah. You know, and how do you learn to drive a car? You can't learn from a book. No. You have to get in a car and you have to drive it. Yeah. You know? Yeah. And so, therefore, when I get these critiques of people saying: listen, look at what Jason's doing here, Jason is delusional...

Jason is delusional. Like they would say that. Yeah, yeah, yeah. They would say that. I'm like, yeah, but, but have you tried to drive Jason's car? No, you haven't. Yeah. Anyway, but just thoughts. Yeah, yeah, yeah. Thoughts, feelings, reactions. Oh yeah. The opinions. Yeah. I, these, it's the next kind of, it's the next boundary, isn't it really?

I [01:51:00] mean, a lot of the stuff that you see is use these prompts to do whatever right. Likes up your life for, in earn 10 times your, you know, use this specific prompt to do this thing. But that's, that's not where the power of these things really sit. Mm. And you, you have to be a skilled user of them. And I think we're, you're a skilled conversationalist. You can be a skilled user of ai. I'd also say the other skills that I, I've been thinking about this a lot. Like what is it that makes you, and I, I think I can claim this, we're both really good at it, right? Yeah, yeah. Like what is it that you and I bring to that, that.

a lot of people find more difficult? I think almost everyone can be really good at it, and they'll be good at it in different ways if they want to. Like, I believe that as an educator and as a human. Yeah. But you and I just have a naturalness with it, right? And always have. Yeah. And it's funny, 'cause people can listen to our evolution of that over the pod, 'cause it appeared after we started this pod, you know? Like, yeah.

Yeah. You know, so we've been talking about it for years now. Right. But I mean, I learned as a designer, right, [01:52:00] to be iterative, to not expect everything to be perfect. To make it again, make it again, make it again. Refine, refine, tinker, tinker. Right. Yeah. And to be comfortable with uncertainty. That's what I learned as a designer.

And I think you think quite differently to me. You are very good at thinking in systems and processes, like the way you use TextExpander. You are good at that meta-level reflective thinking, and that's part of it too: thinking about how you think. A willingness to sit with ambiguity, from my side.

Uh, thinking about your thinking. Like, we can all bring different ways into this. Mm-hmm. I guess my worry in academia is just the big feelings. I know I keep coming back to it, but it's the world I live in and it really occupies a lot of my time. I stand in front of classes and I get a lot of hate.

Yeah. You know, at work. Not when we're called out to do other things, 'cause then people are willingly hiring us to do that. Right. But, um, yeah. [01:53:00] But at home, yeah, it can be a bit tough. Yeah. Maybe they need a framework to work with the AI, that's all I'm saying. Right. Maybe they do.

You've given them one. I'm happy to talk at length about it and charge accordingly. Alright, you better move us on. Yeah, absolutely. All right, we're gonna move on. We're gonna move on. But thank you, that was good. Really good. I think people will find those really helpful, and I'll snippety-doo-dah.

Uh, have you been reading anything? 'Cause it looks like you've been busy. Uh, I have been busy, but I finished Black Hawk Down, which was the book that inspired the movie. So, an actual account of when those two Black Hawks in Mogadishu went down. A ripping tale, because they took a lot of the actual chatter from the radios, they used firsthand, oh yeah,

reports about what actually happened on the ground, and all that sort of stuff. In terms of reading a book and then getting a real sense of the chaos [01:54:00] on the ground in the middle of that battle, mm, and, you know, the mistakes that were made and all of that sort of stuff, it was a ripping read. And well worth it.

I found it in an op shop for not very much money and loved it. Instant buy, and can recommend. Uh, the other one that I'm reading, I'm about halfway through at the moment, is one called Free Time by Jenny Blake. And this is a book about how to build and run businesses. Now, I don't agree with everything she's got in it, but there's plenty in there that's made me stop and think about how we run our business.

Oh, yeah. Uh, and so I do have plans to adapt and adopt some of the things she says in her book for us. I seem to be in a lot of conversations at the moment with people who are emailing us saying things like, I've just started my own little consultancy. You know, those kinds of conversations.

Um, this would be a good book, I think, to read if you are just starting out on the entrepreneurship kind of [01:55:00] bandwagon. Mm. Um, and the other one that I would recommend would be The E-Myth. Oh yeah, yeah. I read a bit of that while I was at your place. I kept picking it up, putting it down again. Yeah. Yeah.

That was useful. Yeah. So both of these books, I think, are useful in the same kind of way. They kind of help you to understand how to systematise bits and pieces of your business in a meaningful way, to be able to get recurring income and that sort of stuff. So that's what's actually on my reading list at the moment.

Great. I've been reading Breakneck: China's Quest to Build the Future by Dan Wang, who is Canadian, but grew up in China and America.

So he speaks Mandarin and Cantonese, I guess, and English, but he's actually Canadian. So he's got this insider-outsider perspective, and it's really interesting 'cause he compares China in its current moment and America in its current moment, and points to similarities and differences, and the similarities he picks up between the two cultures are actually quite [01:56:00] interesting.

But it's really well written. Um, it's really interesting if you wanna just, like, understand our world a bit better. I do like to read to put bigger things in context. Um, yeah. And I'm really fascinated by China, you know, dark factories, and I'm really interested in their competition system, and, like, I love their cars. I'm just fascinated by the speed of technological innovation in China, and yet also the social issues. Anyway, um, you know, a country that can produce an amazing electric car and TikTok.

Yeah, like, it is kind of incredible. And, uh, anyway, highly recommend it. I've also been reading a lot of articles, and I'll put a link to AI as Normal Technology, which is a research article by Arvind Narayanan and Sayash Kapoor, and I think it's a really interesting idea, to think about AI as normal technology, which I think is what you were doing in our previous segment as well.

Alright. I was talking about [01:57:00] it as a normal technology. Yeah, yeah. Um, okay. And then just today, a ripper of an article from Dr. Hannah Forsyth, who, um, was taken out in that massive corporate restructure of ACU recently, but she is an amazing historian who wrote a book about the history of Australian universities, so well versed to comment on our current moment, coming back to how shit things are at work.

Um, the article is called The Communist Takeover of Universities is Imminent, which is pretty hilarious, and it's a reaction to a, I've gotta say, laughably bad article by Stephen Matchett in The Australian. I mean, it's a terrible article. I'm not even gonna link to the original 'cause it's so bad. But here's an excerpt from what Hannah said. Look, even Stephen Matchett admits there's one or two things that maybe just might have gone wrong on management's watch, not least the millions of dollars of wage theft that we wouldn't even know about if it were not for the meticulous, dogged, boring and difficult work by the NTEU, who I suspect is his [01:58:00] real target.

Indeed, he refers to the union and sundry groups of academics, and maybe it's just me, but the word academics here and in the rest of the article just seems dripping with disdain. So, okay, maybe we should perhaps confess that there are many understandable reasons why people, perhaps including Stephen Matchett, are dismissive of or annoyed by academics.

Some are a little arrogant. They often believe themselves to be the smartest person in the room, and this too often makes 'em choose to be the loudest in negotiations. They seem to think that no one will notice the self-interest embedded in their arguments. So true. And they seem to think they're too good for the work that everyone else has to do.

They're paid really quite well, in Australia at least, largely due to having a pretty great union, but often complain about it anyway. And yet Matchett seems kind of obsessed with making this sometimes elitist and always hierarchical group of teachers into communists. Academics who think that universities should be run by worker

soviets are using the disconnect between the top management and staff. She's quoting Matchett here. It's a campaign for restructures [01:59:00] of university boards. The twists and cul-de-sacs multiply. Wait, hang on, says Hannah. University managements have fucked up very, very badly. But the real point is that academics are using this.

For what? For communism? No, the point is to warn management to stop fucking up, take a pay cut, and for fuck's sake fly economy like everyone else. And I dunno, but that kind of fucking sums it up for me. Oh, that's awesome. Stop fucking up, take a fucking pay cut, fly economy like the rest of us. Anyway, so it's good.

I'll put the link in. Right? We're gonna have to go out real quick. Real quick, 'cause headphones. Okay. Um, two minute tips, blah blah blah. David Allen's classic, Getting Things Done, blah blah blah. Hack or idea? I've got one real quick. Right, shoot. Zoe Bowman, friend of the pod. Hi Zoe, thank you for the top tip. Both of us are fans of the book The Curated Closet by Anuschka Rees.

I'm gonna give you a link. Um, it gives you a process for how to dress yourself. You [02:00:00] develop a style statement. Mine is sparkling academia. You know how there's dark academia? Mine's sparkling academia. So it's kind of like cardigans and tartan and things, but done in fun colours, right? Okay.

That's my style. That's my style. And also I like to wear my lady tradies, you know? Yes. So I like overalls and things. Anyway, I shared a note with you, which I hope you can see. Yes. And Zoe showed me a method. There is something that the ladies do, Jason, on Instagram. It's called a fit check. Yes. That's where you take a photo of yourself in your outfit, you know, to check how it fits. Yes.

Fit check. Yeah, sure. And, uh, anyway, Zoe showed me how to long-press on yourself in a photograph that you've taken of yourself. Right? Yeah. Turn that into a sticker. Yep. You just say, turn into a sticker, and then you can post it, share it to a note, and then you can have a fit check note with all these little cute profile pictures of yourself.

Yes. Or, as in the case of the one I also posted to you, a little sticker of Ginger. Yes. [02:01:00] Which I made. So you can make a fit check kind of collection. Yeah. Um, if you wanna, like, review your curated closet and play with your outfits and stuff. Anyway, that's a fun top tip from Zoe. I love it. It's awesome. Okay. I get dressed with an imagined reality of what I look like in the clothes that I pull out of the drawer.

Yes. Uh, I have this mental picture of what I look like when I wear these clothes. Yes. Yes. Often, when I actually wear the clothes, that doesn't match my mental image, but the mental image is the one that I go with. So you don't want a fit check, is what you're saying. I don't even care. It's not even that you want one and you can't be bothered doing it.

You just don't even think that maybe that's a thing, a process step. No. See, this is the gender difference. That's all I'm saying. Fit check. Okay. Okay. I love that. I love the idea. I think it's great. It's cute. It's cute. I go: pair of jeans, black top, like, it doesn't change. Your curated style statement [02:02:00] would be, uh, it would be bojo normal.

Yeah. It's a little bit, um, you do do a line in bojo T-shirts. This is like a thing you do, like, that's your style. Not bojo, BJJ. Oh yeah, right. Yeah. Jiu-jitsu. Yeah. You know, yes. Mostly blue belt. Alright, I think, if I had to: blue belt. Yes. Your style is blue belt. And I would say the subhead of that is jiu-jitsu, but make it corporate.

Yes. Corporate jiu-jitsu, right? Yeah. I love that. But once you know your style statement, you go, is that corporate jiu-jitsu? Yeah. And then you're like, yes, no. And so I look at a thing and say, is that sparkling academia? You know, is it tweedy? Is it kind of a bit fussy, a bit old-fashioned, a bit conservative, but in a fun fabric or a fun colour?

Oh yeah. So yeah. Yeah. Okay. Rebellious, but, you know, conservative. Okay. Yeah. You got one? Yes. I have been on a bit of a [02:03:00] journey. I used to use a piece of software called Bartender. I may have introduced this to you a little while ago. Anyway, 12 months or so ago, Bartender was sold.

The point of Bartender is that it would manage the menu bar on your Mac. So, you know, when you have lots of little items up on the top right-hand side of your Mac, you might have a clock, yes, and wifi, yes. I've got those there, but not the useful thing to connect the Bluetooth to my headphones, like before.

So, yes, yes. Right. So now I use Ice, which is a menu bar manager, and what you can do is pin the useful ones to the top bar, up the top there, and then all the ones that you don't care about, you can put in a hidden menu underneath it. Oh. So that when you roll over the little ice block icon, it reveals the hidden menu underneath it.

Oh, right. So the only ones I've got up there at the moment are the little [02:04:00] Focus toggle, you know, when you turn it from, yes, I'm available, to Do Not Disturb, that sort of thing. Do Not Disturb. Yeah. Mm-hmm. Wifi, because wifi is so terrible in my house, I'm never quite sure if I'm attached to it or not.

Sure. Battery indicator, and a little calendar notification, and the Timing app. Timing app. They're the only ones I've got up in that top thing. All the others, Bluetooth, everything else that you might want to have quick access to, TextExpander, um, the scanning software, Google Drive, Shutter, all that sort of stuff.

Zoom, all of it sits in a hidden menu bar. And I just have to roll over this thing to see it, and it expands down, and then I can click on the thing and go straight to it. Anyway, long story short. That's nice. I like that for Mac menu bar management. It's free and it's open source, it's on GitHub, so go after it.

You just put all my favourite words in one sentence. Yeah. [02:05:00] Link is in the show notes. Excellent. Great. All right, I'm taking us out. Two hours and 17 minutes, maybe this is just gonna be a long one, like, yeah, I don't know. It's not that long. It's not that long. Thank you for listening all this way. It's not that long.

Um, we love reviews, obviously, and we're gonna start reading them out in our special review segment of the mailbag. Yes, don't worry, we've got you, fam. We're gonna do a country per episode. Alright? So we love them, keep them coming. If you want your question featured, leave it on SpeakPipe like Lily did, at speakpipe.com/thesiswhisperer, or you can email us at pod@ontheregteam.com.

All of this is in the footer of the show notes. Jason, as we've established, is not very online. He's having a break, although he has been putting content on our On The Reg LinkedIn, which is, you know, yes, kind of fun. It appears at really random intervals, I've gotta say, Jason. Like, I'm like, that was Jason two weeks ago.

Why am I only seeing it now? So not timely, but you know, it gives you a sense of [02:06:00] what we're doing around the traps. That might be the algorithms, like, the LinkedIn algorithm's weird. It shows up, oh, fucking, it shows up weird stuff. Yeah, so weird. So weird. Anyway, you can find me as Thesis Whisperer pretty much everywhere, and at thesiswhisperer.com, where I have a new post up.

For the first time in months. I'm going to get back into it. Okay. Um, it costs us about a thousand a year to produce this podcast. If you wanna support our work, you can be a Riding the Bus member for about $2 a month on our Ko-Fi site, which, we only just worked out, goes to my bank account. So I've been embezzling all the money. Don't worry, I'm paying it back.

Um, since the last episode, Jason hasn't put in the list of people who have followed, but there have been people. Thank you so much. Again, I think maybe we should put our list of people in the mailbag. I reckon we should move this to the mailbag, just saying, production note. Thank you so much. Okay. This really helps us out.

Thank you for listening this long. Thank you for putting up with [02:07:00] us. Hope you've enjoyed it. Whether you're gardening or driving, big shout out. Thank you, Jason. It's been great. Thanks, Inger. See ya.

Bye.