Inside Tech Comm with Zohra Mutabanna
Inside Tech Comm is a show for anyone interested in learning more about technical communication. It will also be of interest to those who are new to the field or career-switchers exploring creative ways to expand their horizon. You can write to me at insidetechcomm@gmail.com. I would love to hear from you.
S7E1 When Your Bot Writes Poetry For Execs
We kick off the new season by breaking down the “contractor mindset” for writing prompts — the key to getting AI to deliver to spec, not with unnecessary “flair.”
Chuck Nealis joins us to explore how to write high-impact prompts through the lens of storytelling. We dig into the simple but powerful guardrails that help navigate AI’s context window. Because as conversations grow, crucial constraints tend to fade — and that’s when prompt drift begins.
So what do you do when the whiteboard starts getting messy?
In this episode, we unpack:
- Explicit prompts with clear purpose, audience, and format
- Contractor mindset over poetic creativity
- Context window limits and the 80% summary handoff
- Labeling drafts and starting fresh threads
- Triangulated reviews across multiple AIs
- Verifying sources and rejecting fabricated citations
- Building repeatable, reliable prompt blueprints
- Why consistency leads to accuracy
Treat AI’s memory like a whiteboard, not a library — and start designing prompts that stick.
Guest Bio
Chuck Nealis is a seasoned instructional designer, Organizational Change Management lead, and AI prompt expert with over 20 years of experience helping organizations deploy practical AI‑powered learning solutions and training programs. Chuck is also an author and thought leader on effective AI usage, focusing on precision‑engineered prompting and learning design.
Show Credits
- Intro and outro music - Az
- Audio engineer - RJ Basilio
Hello listeners. Welcome to season 7 of Inside Tech Comm with Zohra Mutabanna. In this season, we have a series of standalone episodes, each tackling a critical issue shaping tech comm and content today. From hallucination-resistant content and AI adoption to ethics and equity in tech, this season is about navigating change with clarity, integrity, and impact. Let's get started. I had wrapped up my season, but then Donna Cunningham, a very good friend of mine, put me in touch with Chuck. And Chuck had an interesting take on how to do prompting. And there's a lot to prompting. It's not just prompt engineering. I know it's been overused and become cliched, but we're going to do a deep dive, because I've talked to a lot of people, a lot of friends at work, who ask: how do we prompt? Why does this happen, or why does this not happen? So we're trying to do a little deep dive into the nature of an LLM and how we can take full advantage of this environment. I have Chuck Nealis here, who is going to, in a very unique way, present to us and contextualize how we should engage with generative AI. Chuck, welcome to my show. How are you doing?
Chuck:I'm doing fine, Zohra. How about yourself?
Zohra:I'm doing very good. Thank you. Thank you so much for taking the time. We've had a lot of back and forth. You've put a lot of effort into this content. I'm happy to have you on my show and have this one-off episode to learn and to share. So, Chuck, give us a little background about yourself. What do you do?
Chuck:Sure. So, for about the last 20 years or so, I've been an instructional designer, change lead, and, on occasion, a tech writer. I'm Prosci certified in change management, and I learned early on the importance of user adoption. It's not only the information that we provide that's important, but also how we provide it. More recently, I've authored two books on Amazon, Instructional Design Companion and Developing a Growth Mindset Culture. Instructional Design Companion is a 349-page desk reference book, so it was extremely important that all of the information I used was accurate. And this is setting us up for the remainder of the discussion. My approach to AI is practical. While I was writing Instructional Design Companion in 2023, if you think about it, ChatGPT was in its infancy, at least the portion that was open to the public. Originally, I was going to create a quick reference guide for myself on how to approach SMEs during each stage of the ADDIE process. I realized early on that it was kind of daunting for me to do this alone, so I decided to use ChatGPT. Little did I know at the time how much ChatGPT hallucinates. I mean, it's tremendous. And it was even worse back then; it's gotten a little better since. So over the course of the next year, while I was completing the first book, I had to develop my own framework. I used something I called structured decomposition, explicit instruction, triangulated review, and sunset reset. And my accuracy level went way up, and I was able to complete the book and cite all the resources used. I want to say 98% accuracy, but how do I know what percentage it actually is? It's very accurate. Anyway, I've been contracting in the interim; while I was authoring the book, I continued contracting. My newest book is going to be released later this year, and it's titled True Stories of AI Gone Wrong. Or Was AI Hallucinating When It Wrote Them? And it's a very interesting book, because it's stories that represent the problems, and then we discuss how to fix those problems. I think it's going to be a very, very good book. Anyway, that's pretty much it about me.
Zohra:Thanks, Chuck. I'm a big proponent and follower, if I may say so, of the growth mindset. I love that, and I get that vibe from you, which is awesome. And congratulations on the books that you've published and your upcoming book. I've written one book, and it's a lot of work. It had nothing to do with technical writing; it was on decluttering, actually. It's also on Amazon, but I do not promote it or anything. But it's a lot of work. The ideas that you talked about, structured decomposition, sunset reset, I'm very intrigued by. And the triangulated review. All those things we will dive into, but we are going to start with stories. Is that right, Chuck?
Chuck:Sure. I can uh start with the first story.
Zohra:Yes, please. Let's start with your first story.
Chuck:Okay. So story one is the night of the haiku, and I'll just get right into it.
Zohra:Go for it.
Chuck:Story one. It was supposed to be an easy afternoon, no drama, just a quick update to the release notes. My newest tech writer, Jade, was finishing up. She turned to our AI assistant. Hey Rex, she said. Write a quick known issues section. Rex, our ever-pliant AI, replied, on it, would you like an empathetic closing? Uh-oh. That should have been the warning sign. Anytime an AI offers empathy, something ridiculous is about to happen. A few minutes later, Jade refreshed the document. There, sitting neatly under the heading of our document, was a poem. It said, Servers, like people, sometimes need a quiet night. Restart respectfully. That was it, a haiku, a beautifully formatted, well-intentioned haiku, inside our production release notes. I laughed so hard my ribs hurt. Jade was horrified. It's not funny, it's live in staging. I wiped a tear from my eye. We just published emotional support poetry for the IT department. When I finally caught my breath, I asked the only question that mattered. What exactly did you tell it to do? She said, Write a quick known issues section. Let's stop there. What happened? What went wrong?
Zohra:Release notes with a haiku. That would be hilarious. I think that would also humor the audience; the more technical the audience, probably the more it will humor them. But what exactly did happen here?
Chuck:So what happened is the AI was given a very vague task. It wasn't given proper instructions, so it had to choose to be creative instead of factual. Whenever you leave that void, AI has to satisfy it with something. So it just made it up. Rex wrote a poem because it wasn't told what kind of text to produce or for whom. It filled the silence with style.
Zohra:And when we say Rex, we are alluding to a generative AI tool.
Chuck:It's just what they call their generative AI tool. Instead of saying ChatGPT or Claude or Gemini, I decided we'll call it Rex. And I'm sorry for not making that clarification earlier.
Zohra:No, no worries. Like you said, we just want to clarify that. It's a fictional but true story, as we are calling it, right?
Chuck:Right. Yes. Right. Well, I mean, it's fictional, but the solution is factual.
Zohra:Ha. All right. So vague prompting. That's what you're alluding to with this story, right? Prompting without the context.
Chuck:Yeah, story one is kind of simplified, and yes, that's what we're alluding to.
Zohra:Perfect. Now, when you and I talked about this, you also brought to my attention: treat AI like a contractor, not a poet. I want you to give a little more color to it, I guess. What did you mean by that?
Chuck:Well, you have to provide explicit instructions and a proper framework for the AI to run off of. I think that's what I meant when I said that.
Zohra:Like a contractor and not a poet. Okay. Yeah, and that makes sense. Since you want a factual output, you've got to guide it. And I've had many conversations where we talk about how we control the output that we want. For that, your prompt also has to be controlled, well-organized, and well-structured. That's what I would mean. Now, as you were reading this story to me, this is a scenario that happens many times. We've read of events where ChatGPT has given imperfect, inaccurate outputs. And you talk about this idea of structured decomposition. Yes. So I want you to kind of take us through at an LLM level. We did talk about how our vague prompt got us this vague output, but maybe more at a technical level, not very technical, but at a high level, explain to us what happened here.
Chuck:Okay. Structured decomposition will come into play in one of the other stories more than here. I mean, I can get into a discussion of it.
Zohra:Let's put it off until we get to that story.
Chuck:Okay, fine. With regards to what went wrong from a process standpoint, the core problem was a complete lack of explicit instruction. Let me explain explicit instruction, because this is extremely easy, but it's extremely important. A lot of us were told to be nice to the AI if you want nice results. We're told, be good to your AI and it'll be good to you. I've heard so much of this. Anyway, that's all nonsense. You have to give it explicit instructions. You can't say should, would, maybe. These are not explicit instructions. The United States Code of Federal Regulations is all explicit instructions. It's must. It's very legalese. You must do this. Nothing that opens itself up to interpretation. It's straight facts. If you provide your AI with those straight facts, and you say must, and you give whatever the other explicit instructions are, then you are going to get phenomenally better results. Just that one change is going to give you much better results. So that's what explicit instructions are. I've also developed some prompting tools. I have sites with sophisticated prompting tools that many instructional designers are using right now as their go-to prompting tool. The prompting tool takes into account purpose, audience, output format, and many other items; I think there are nine in total. These are explicit instructions, explicit instructions that give the AI a path it can't easily detour from. So that's the critical takeaway from this first story, I think.
Zohra:Yeah, and as you were talking about the different items that you need to keep in mind to craft a prompt, the purpose, audience, the constraints, the output formats, other things that I can think of are examples for voice, tone, style, and so on. Right. And you don't have to use all of these all the time, but having these guardrails and a format to your prompting will definitely give you more control, like the explicit prompting that you talk about.
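To make these guardrails concrete, here is a minimal sketch in Python of what an explicit-instruction blueprint could look like. The field names and the must-style phrasing are illustrative assumptions, not the actual nine items in Chuck's tool:

```python
# A sketch of an explicit-instruction prompt blueprint.
# Field names here are illustrative, not Chuck's actual nine items.

from dataclasses import dataclass, field

@dataclass
class PromptBlueprint:
    purpose: str                       # what the output is for
    audience: str                      # who will read it
    output_format: str                 # e.g., a bulleted "Known Issues" section
    tone: str = "formal"
    constraints: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the blueprint as explicit, must-style instructions."""
        lines = [
            f"You must produce: {self.output_format}.",
            f"Purpose: {self.purpose}.",
            f"Audience: {self.audience}.",
            f"Tone: you must write in a {self.tone} tone.",
        ]
        for c in self.constraints:
            lines.append(f"You must {c}.")
        lines.append("You must not add creative flourishes, poetry, or filler.")
        return "\n".join(lines)

prompt = PromptBlueprint(
    purpose="document unresolved defects in the current release",
    audience="IT administrators reading production release notes",
    output_format="a 'Known Issues' section as a bulleted list",
    constraints=["use only the defects listed below",
                 "keep each bullet under 25 words"],
)
print(prompt.render())
```

The point of the structure is the same as the levee: every field closes off one direction the model could otherwise wander.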
Chuck:Can I give a little analogy here?
Zohra:Absolutely, please. Yeah, go ahead.
Chuck:So if you think about it, let's say, hopefully everyone knows what a levee is. Let's say you're walking on a levee and you have to get from point A to point B, and then someone puts a blindfold on you. Without clear instructions, without someone guiding you in real life, you're gonna have a problem: you'll drift to your left and fall down that hill, or drift to your right and fall down that hill. But if you're given a guardrail, like you said, proper instructions that keep you on that path, then you're less likely to fail. Does that make sense?
Zohra:Yes, that makes sense. Another childhood game that I remember is hot and cold: hot, warm, and cold. You would be blindfolded, and then you have to find something, and your group would guide you eventually to that location. It's something like that.
Chuck:I like your analogy better.
Zohra:Oh, thank you. It just came to my mind; I was thinking about how you have to guide the LLM, and as you were talking about the levee analogy, that came to me. So I'm glad you're using these real-life examples to explain how AI works under the hood. Yes. Yeah. So thank you for taking this storytelling approach. I'm not very good with storytelling and using metaphors in my explanations, so this is something new that I'm trying out myself.
Chuck:I found it a necessity as I'm really bad at talking in public. So I use storytelling as a method of getting my point across.
Zohra:Yep, that makes sense, and that builds bridges. It clicks immediately; it clicked for me for sure. Moving on to the next story: how about you read the next story to us?
Chuck:Okay, sounds good. Here we go, and let me clear my throat. It was Friday afternoon, and a critical report for our execs was due in an hour. Jade called me over, pointing at her screen. I think the AI is haunted, she said. I looked. The report was on Q3 performance, but the AI, Rex, was confidently inserting data from last year while labeling it as current. It was a ghost in the numbers. I scrolled up. The chat history was endless. It's not haunted, I said. It's just run out of whiteboard space. I explained that an AI's memory, its context window, is like a whiteboard, not a library. At the start of the project, you write your most important rules at the top: Q3 data only, formal tone, source: finance dashboard. But as a conversation continues, you fill the board with new notes and drafts. Eventually, to make room for new thoughts, the AI has to erase something. Without clear instructions, it might erase one of those critical rules at the top. That's what happened here. The Q3-only rule got wiped away. That's a good stopping point.
Zohra:All right. Going back to our storytelling and how that helps us connect, the whiteboard analogy again helps me think, oh, that makes perfect sense, Chuck, that's what you're trying to talk about and what we are going to be talking about. So this image has helped me understand, but it also tells me, in this specific story, what actually happened to cause the AI to become haunted. Why don't we talk about that?
Chuck:Yeah, so what happened? The reason why the AI started inserting old, irrelevant data is because it was a very long conversation. The chat must have been open for a long time, and it had forgotten the most important rule, which would have been to use Q3 data only. ChatGPT, Claude, Gemini, and a lot of people use Perplexity as well, right? But those are the main AIs that we use. Claude and ChatGPT have a context window that is about one-tenth of Gemini's. It doesn't mean that Gemini is that much better, because Gemini has issues of its own, but I'm just saying Gemini has a longer context window. Having said that, within that context window, an AI has only so much room. So even though Gemini's context window is that much larger, still, when a conversation with an AI gets too long, it's gonna start hallucinating. It doesn't matter which LLM you're using. So the context window is extremely important. And I'm gonna give you another analogy if you would like.
Zohra:Absolutely go for it.
Chuck:Hopefully, a lot of the audience has seen the old I Love Lucy reruns. I can give you another analogy if you haven't seen I Love Lucy. But the AI's context window is like Lucy on the conveyor belt, if anyone's seen the chocolate factory episode where she's trying to wrap the chocolates coming down the conveyor belt in time, and she eventually starts eating the chocolate herself. Anyway, the AI's context window is like Lucy on the conveyor belt. She can handle the chocolates in front of her. As more and more chocolates come down the line, she gets overwhelmed. And the ones that pass by are gone forever. It's the same with AI. Once your original instructions scroll off the end of that conveyor belt, they are effectively forgotten. And that's exactly what went wrong in this story. The original rule of Q3 data only had gone off the end of the conveyor belt, so the AI literally could not see it anymore. And I think that's a really good illustration.
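A toy way to see the conveyor-belt effect in code: treat the context as a fixed-size queue and watch an early rule fall off as the conversation grows. Real context windows are measured in tokens, not line counts; the five-line limit here is purely illustrative:

```python
# Toy illustration of the whiteboard/conveyor-belt idea: a fixed-size
# context where the oldest lines fall off as new ones arrive.

from collections import deque

context = deque(maxlen=5)  # the "whiteboard" only holds 5 lines

context.append("RULE: use Q3 data only")   # written at the top, early on
context.append("RULE: formal tone")
for i in range(1, 5):
    context.append(f"draft note #{i}")     # the conversation keeps growing

print("RULE: use Q3 data only" in context)  # False: the rule fell off the belt
print(list(context))
```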
Zohra:That is a really good illustration. And going to the practical side of this, when I started using generative AI, these were things that we had to learn over time. And then you had the tokens; there was a token limitation with ChatGPT. So in this context, what we are really talking about is what the AI can remember, right? In simple terms, what can the AI remember? And the longer your conversation gets, the more it hallucinates, so you have to bring it back on track. And to do that, there are some strategies. You did talk about how to keep that context window in mind. Some things that I do, for example: I use Microsoft Copilot at work, and it has this memorize option now where you can say, hey, remember this. That's perfect. That's so amazing, right?
Chuck:Yes. Yeah, you can actually do some of that functionality with each AI now. You can ask it to remember certain things.
Zohra:Exactly.
Chuck:You can also overuse this capability, so you want to limit it, at least right now. You don't want to bog the system down with too much data to remember across chats.
Zohra:Definitely, but I I want to kind of understand from your perspective in what situation could one overuse this?
Chuck:Well, I don't know about Copilot, but I know for ChatGPT and Gemini, and Claude to a lesser extent, you do have that capability now. You can remember things on your account, but you are only allowed a certain amount of data that you can store. And if you store too much, then you have to go in and eliminate some of the old data. So as long as you do that, I guess you're better off. But I have a better solution for everyone than necessarily taking advantage of that, and it is to keep your AI accurate. The way I do that is toward the end of each chat, actually at 80%. If it's ChatGPT, I'll say, are we about eighty percent done with this chat? I know that that is the area where AI starts hallucinating. Is it around eighty percent, maybe even less? It depends. Of course, it depends; every situation's different. So I'll ask for a summary, and I actually have a couple of prompts for that, to get a really good summary. The summary it's gonna give you is gonna be one or two pages, unless you provide it with explicit instructions on what you need. And there are prompts for that, and I could provide them to anybody. There are definitely ways to do that. So you're gonna run that prompt, and then you're gonna take that data, and you will have another prompt that you put into the new chat. And this works extremely well. I use it all the time, especially if I'm doing complex things where you know that you need multiple chats to complete the task. Then this is a real good way of doing it.
Zohra:I love that you brought this up because that's something that I have started doing as well, especially with longer conversations. And we all have our go-to prompts where we ask the tool of choice to summarize and then what we can take from that summary and start a new conversation.
Chuck:Exactly.
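For listeners who want to try this, here is a rough sketch of the 80% handoff in Python. The four-characters-per-token estimate, the 80% threshold, and both prompts below are illustrative assumptions, not Chuck's actual prompts:

```python
# A rough sketch of the 80% summary handoff. The 4-chars-per-token
# estimate and the threshold are heuristics, not vendor guarantees.

def estimate_tokens(text: str) -> int:
    """Crude approximation: roughly 4 characters per token for English text."""
    return len(text) // 4

def needs_handoff(conversation: str, context_limit_tokens: int,
                  threshold: float = 0.80) -> bool:
    """True once the chat has consumed about 80% of its context budget."""
    return estimate_tokens(conversation) >= threshold * context_limit_tokens

# Hypothetical prompts; the real go-to prompts may differ.
SUMMARY_PROMPT = (
    "You must summarize this chat in one page. You must preserve every rule, "
    "constraint, decision, and open question. You must not add new material."
)
NEW_CHAT_SEED = (
    "This is a continuation of a prior chat. You must treat the following "
    "summary as binding context and follow every rule in it:\n\n{summary}"
)

conversation_so_far = "..." * 30000  # stand-in for a long chat transcript
if needs_handoff(conversation_so_far, context_limit_tokens=25000):
    print(SUMMARY_PROMPT)  # step 1: ask the old chat to summarize itself
    print(NEW_CHAT_SEED.format(summary="<paste summary here>"))  # step 2: seed a new chat
```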
Zohra:Right? That's a great strategy. Are there any uh essentials in your prompts that you would like to share at a high level that somebody can use?
Chuck:I mean, I would love to. Is there a location where we can share it? We can definitely share it in the show notes.
Zohra:So how about this? It would be great for the audience if you can share some examples of prompts that you use to summarize a long conversation.
Chuck:Absolutely. I would be happy to do that.
Zohra:That's awesome. So this is where you touched upon the 80% rule and context handoff.
Chuck:With that, yeah. I mean, I can't say that's a hard-and-fast rule. That's what I use personally, and I've had good luck with it. I know that you cannot take a chat to the very end without having some sort of problem. If anyone says that they don't experience any problems toward the end of a chat, I mean, that's great for them. But what I've noticed is that you do. So I've set that rule of 80%.
Zohra:Yeah, I think you're right. It's not that you're gonna run into something like this in every scenario or every conversation. It just depends on how long the conversation is, how detailed it is, and the depth.
Chuck:Right. There's one other consideration here, too. If you go too long, then you're not gonna be able to add any more content, and you're not gonna be able to get that summary we're talking about.
Zohra:Right.
Chuck:Well, and then what are you supposed to do? Bring that whole conversation into your next chat? Then you're gonna be at the end of that chat. So yeah.
Zohra:Another strategy that I've tried, which maybe I should share with you: let's say I'm reviewing what's-new content. That's not a long conversation, but sometimes I have multiple pieces of content that I want reviewed, and I will obviously open a different conversation thread for each. But sometimes I may have a long-form article that I want it to review and give me a first iteration as a draft. Let's say I like it; I will label it, I'll call it draft A. Then, once I'm 10 levels deep and I want to go back to draft A, instead of copy-pasting it, and to use fewer tokens, I reference that labeled piece of output, and it refreshes itself. And I've had success with it. So that's a strategy that I've tried and found very beneficial.
Chuck:Awesome. That is awesome.
Zohra:So I think we all come up with our own creative strategies, and that's what we are trying to share with the audience here that you will come up with your own strategies, but these are some strategies that we've tried, tested, found some success.
Chuck:Yes.
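A minimal sketch of that draft-labeling strategy in Python: keep a local index of labeled outputs so later prompts reference a short label instead of re-pasting the full text. The labels and prompt wording here are hypothetical:

```python
# Sketch of the draft-labeling idea: store labeled outputs locally and
# refer to them by name in later prompts instead of re-sending the text.

drafts: dict[str, str] = {}

def label_draft(label: str, text: str) -> str:
    """Store a draft locally and return the prompt that names it for the AI."""
    drafts[label] = text
    return f"Label the draft above as '{label}'. I will refer to it by that label."

def revise_by_label(label: str, instruction: str) -> str:
    """Reference a labeled draft by name rather than re-pasting its contents."""
    return f"Go back to '{label}'. {instruction}"

first_output = "...full text of the AI's first long-form draft..."
print(label_draft("draft A", first_output))
print(revise_by_label("draft A", "You must tighten the introduction to 3 sentences."))
```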
Zohra:I think we are ready for story three, unless you have something more to add on story two.
Chuck:I think we're good. So each of these stories, it's not the complete story, because a complete story would provide the solution, etc. It gets more involved. So all we're doing is an excerpt from the story, and then we're going into discussion.
Zohra:Discussion, right.
Chuck:All right, so story three is a two-faced intern. Here we go. So while writing my book Instructional Design Companion, oh, incidentally, this is true. This is the one story that's actually true. While writing my book Instructional Design Companion, I was using a generative AI to help speed things up. Now, in an effort to not offend any specific generative AI, we'll just call this one Muse. So one afternoon, I needed a statistic on corporate learning to round out a chapter. Muse delivered a perfect one. A 2023 study by the Global Learning Institute found that 82% of managers report a skills gap. It even provided a plausible-looking source. It was beautiful; it fit perfectly. This is amazing, I thought, and I almost published it. But I have a rule that has saved me more times than I can count. Never trust, always verify. So I copied Muse's perfect sentence and got ready to check it. Okay, let's stop there.
Zohra:All right. So it is always a good thing for us as content creators to verify if it's giving you any data. One is to check the link, but what is the source of truth? What did you find?
Chuck:So what I started to do when authoring Instructional Design Companion is involve all three AIs to see which one was the most hallucination-free. And what I found was they all hallucinate, just in different ways and at different times. What I decided to do was take portions of, let's say, a chapter which I had used ChatGPT's help in authoring, bring it to Claude, and say, Claude, this is what ChatGPT gave me. Can you check it for authenticity and cite the source? And if Claude gave me the same source as ChatGPT did, I knew it was a reliable source, because they're not both going to hallucinate the same hallucination unless they're on the same acid trip.
Zohra:That's a good point.
Chuck:So, anyway, I would also use Gemini in the same way, and this triangulated review wound up being very helpful. Plus, of course, I would check out the sources myself, but this provided a great way of using AI while ensuring that the materials you're using are correct.
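One way this triangulated review might be sketched in code: pose the same claim to several independent models and trust a citation only when they all return the same source. The reviewer functions below are hypothetical stand-ins, not real API calls:

```python
# Sketch of triangulated review: ask independent models to cite a source
# for the same claim and trust it only when they agree.

from typing import Callable

def triangulate(claim: str, reviewers: dict[str, Callable[[str], str]]) -> bool:
    """Return True only if every model cites the same source for the claim."""
    question = (f"You must verify this claim and cite its original source, "
                f"or reply NOT FOUND: {claim}")
    sources = {name: ask(question) for name, ask in reviewers.items()}
    for name, source in sources.items():
        print(f"{name}: {source}")
    unique = set(sources.values())
    # Matching citations from independent models are unlikely to be the
    # same hallucination; any disagreement means verify by hand.
    return len(unique) == 1 and "NOT FOUND" not in unique

# Stand-in reviewers for illustration only.
reviewers = {
    "model_a": lambda q: "Global Learning Institute, 2023 report",
    "model_b": lambda q: "NOT FOUND",
    "model_c": lambda q: "Global Learning Institute, 2023 report",
}
claim = "82% of managers report a skills gap (Global Learning Institute, 2023)"
print("trustworthy:", triangulate(claim, reviewers))  # False: model_b disagrees
```

Agreement is a filter, not proof; as Chuck says, the final step is still checking the cited source yourself.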
Zohra:So, what went wrong here? Uh was it a bad prompt again or something else?
Chuck:Not necessarily. I mean, it could have been a bad prompt. It could have been that I didn't give it enough explicit instructions. I don't remember exactly what the cause of the hallucination was, but I don't think you really need a cause. It can hallucinate at any time.
Zohra:At any time, I think so. So your point really here is: verify. And you verified using three different tools.
Chuck:Yes.
Zohra:So that's one way to verify, and of course, looking at the source. Where is this information coming from? Right. So you're doing your fact-checking your content.
Chuck:Right. Once I asked Claude to back it up and give me a source, then I did have a source to look at myself. So it made it easier. And even with this painstaking process, it was still easier than me having to go look for the information myself from sources, you know?
Zohra:Yeah.
Chuck:So getting that verification was extremely important.
Zohra:Yeah, and besides the workflow that you shared of checking your content across three different tools, is there anything else you would add to that workflow to catch these convincing lies that the AI outputs?
Chuck:If you follow the first two stories, the tools we use for mitigating the hallucinations, and then you use this third tool, then I think that you're going to minimize it. But to be honest, in my book there are going to be 10 stories, and each of them is going to have a different method. So there are certain things that we can do. Yes, I think the prompt is very important. So, on my site, perfectprompter.com, if you don't mind, and I'm not trying to plug it, I'm just saying, if anyone wants to go in and check it out, it provides some documents, and it also provides an AI engine that I built. It's free, if anyone wants to create a perfect prompt.
Zohra:And you will share that detail with us, and I will put it in the show notes as well.
Chuck:Okay, sounds good.
Zohra:Let's do that. Yeah, absolutely. The more resources that we can provide through this podcast of mine, that's my goal, so that we can all learn from this experience and you know, correct the course while we can.
Chuck:That sounds awesome. And really, what I'm trying to do is just enable others so that they don't have to go through what I went through.
Zohra:Exactly. Right. And we all have to learn from each other.
Chuck:Right. I mean, the year of putting that book together was extremely tough, because I did have to validate all the sources and everything.
Zohra:Yeah, I was thinking about that. And this is one question that I had on my list. As you were writing this book, did you discover any other recurring failure patterns that could be avoided with a few changes, apart from what we've talked about and covered so far?
Chuck:Consistency is extremely important. And what I mean by that is, if you know something works, make sure you replicate it each time. For example, if I did a chapter and I noticed, well, this works really well, then I would replicate it. And if it worked really well again, then I would know that I'm onto something. And I think that sort of experimenting, and you can call it A/B testing if you want to, but not necessarily so, because there wasn't always a B. But you get the idea?
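A small sketch of that replicate-what-works idea: tally which prompt blueprint keeps producing outputs that pass verification, and reuse the proven one. The structure is illustrative, not an actual tool:

```python
# Sketch of a tiny prompt registry: record which blueprint's outputs
# passed verification, then reuse the one with the best track record.

from collections import defaultdict

registry: dict[str, dict[str, int]] = defaultdict(lambda: {"wins": 0, "tries": 0})

def record(prompt_name: str, accepted: bool) -> None:
    """Tally an attempt; 'accepted' means the output passed verification."""
    registry[prompt_name]["tries"] += 1
    if accepted:
        registry[prompt_name]["wins"] += 1

def best_prompt() -> str:
    """Reuse the blueprint with the best acceptance rate so far."""
    return max(registry, key=lambda n: registry[n]["wins"] / registry[n]["tries"])

record("chapter-draft-v1", accepted=True)
record("chapter-draft-v1", accepted=True)
record("chapter-draft-v2", accepted=False)
print(best_prompt())  # chapter-draft-v1: it worked twice, so replicate it
```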
Zohra:Yeah, I do get the idea, and that's such a great piece of advice. That consistency: if you found success, replicate. Yeah, definitely. We have kept this conversation fun with the storytelling, and we've covered quite a few things: good prompting practices, the context window, the guardrails that you should consider while prompting. Anything else that you would like to add, Chuck, to this conversation?
Chuck:Maybe just to reiterate the most important thing that you can do, and that is to keep your chats at 80% or less in length. You don't want to extend beyond that. The reason is that the chat might be acting up and you don't even realize it until you check your results, so you could be wasting a lot of time. You go back to verify and find out that for the last 15 or 20 minutes the chat was not completely accurate. And you have nothing to lose, because if you're paying $25 a month for the AI of your choice, and they're all right around $25 a month, it doesn't matter how many chats you have. It's normally based on token usage. So you could have 20 small chats or 10 long chats. Why have a long chat where, at the end of that chat, it's got a good chance of hallucinating? Whereas if you use a short chat, then when you're at 80%, you have enough room left over to ask it to summarize. Or if you have a prompt in place, you just input that prompt.
Zohra:Awesome. Now, while we are on the metaphors, there was one that you mentioned at the start, which was, I believe, the sunset reset metaphor. And I wanted to make sure that we came back to that.
Chuck:Okay, so sunset reset is pretty much based on the context window, and it's just another way of looking at it. Think about the sun: when it comes up, you capture all your information during that time frame, and when the sun sets in the evening, your information is no longer visible. A chat is very similar to that. You want to make sure that you... actually, I would prefer to use the Lucille Ball chocolate factory conveyor belt analogy.
Zohra:Okay, but it's a similar concept.
Chuck:Yeah, it's very similar.
Zohra:It's very similar. All right, got it. I think we probably mentioned it at the start of our discussion, so I just wanted to make sure that we had covered ground and touched on anything we had missed throughout the storytelling and the questions that followed. I think I have covered what I had in mind for our conversation, Chuck. On a final note, is there anything that you would like to add?
Chuck:Well, only that I can provide a lot of information and resources for anyone who's interested, and it's all free.
Zohra:I appreciate that. That would be awesome. And for the things that we did request from you throughout this conversation, we will be providing those in our show notes: access to the tool that you talked about, and these stories, just so that you can read and enjoy them again, along with all the other resources.
Chuck:Okay, yeah, I would like to say something. On LinkedIn, I have a blog called Daily AI Blueprinter, and it's amassed close to 1,500 subscribers in like two months. Each week I was coming up with new blueprints. I've kind of paused that recently to finish up my new book. But anyway, you can find me there. It's called Daily AI Blueprinter, and it provides a lot of useful tidbits of information if anyone's interested.
Zohra:Absolutely. And again, this is another plug that we will have in the show notes. So we will capture all of this information for the audience and make sure it's all in there, Chuck. We'll be in touch. Thank you so much for being on my show, taking the time and putting this effort into creating these stories, and bringing more clarity through storytelling.
Chuck:Thank you very much, Zohra.
Zohra:Listen to Inside Tech Comm on your favorite app and follow me on LinkedIn, or visit me at www.insidetechcomm.show. Catch you soon on another episode. Thank you for listening. Bye-bye.