Putting AI to the Test: One Designer’s Articulate Experiment

By: Robin Lucas with Marisa Hubin

What happened when we asked an AI assistant to help build a real course for real learners.

Many of us in learning and development appear to be past the hype cycle. Most of us have experimented with tools like Copilot, or with chatbots like ChatGPT or Claude, and now we're trying to figure out what's genuinely useful. But what happens when AI isn't something you consult in another window, but is instead embedded in the tools we use every day? 

This article follows instructional designer Marisa Hubin as she puts Articulate Rise’s AI Assistant to the test. The goal was to see how far the tool could take her in designing a course for new community managers. It wasn’t just about checking out features. This was about building a real course for a real audience. 

What she found was a tool that was both helpful and frustrating. The AI got her started fast, but it couldn't stay on message, and it couldn't retain the big picture throughout the course. The experience revealed how AI can assist, how it can mislead, and why thoughtful instructional design is still very much a human skill. 


Setting the Stage: A Real Course, Not a Demo 


Robin: Can you describe what this experiment was all about? 

Marisa: Sure. Articulate had just released its new AI assistant, and a lot of us were curious about what it could do. I was asked to evaluate its capabilities in a more structured way than others had, specifically to test how well it could support the development of a real learning experience from beginning to end. 

The course topic was assigned, and my source content was a PDF of our Community Manager playbook. That gave me a baseline for evaluating what the AI gave back: I could check how close it came, what I had to change, and what I could reuse. 

Robin: So you weren’t just exploring. You had a specific use case in mind? 

Marisa: Exactly. I wanted to create something fast, but still have it aligned with learner needs. This wasn’t about whether AI could make something flashy. It was about whether it could build something truly usable. 


Initial Output: Fast, but in Need of a Human Touch 


Robin: So once you clicked “Start with AI,” what did the tool give you? 

Marisa: First, it gave me an outline of the course based on the Community Manager playbook. Then, from that outline, I had it generate paragraphs of text for each lesson. I wanted to see everything it gave me up front so I could revise the tone and adjust for clarity. 

Robin: Once you had your content blocks, what happened next? 

Marisa: Once I had those, I could use the Articulate AI assistant to convert that text into any of the standard Rise features: flashcards, accordions, tabs, image with text, and so on. However, I quickly found that I had to look at those text paragraphs with a critical eye, especially the long ones. For example, I had a large text paragraph, and I was curious what it would do if I asked it to "convert to flip cards," since that's a treatment that can be fun. In that instance, it created six or seven flip cards. It was a lot. And as a trained instructional designer, I thought, "No one wants to go through that many flip cards!" 

Trying things like this taught me that the AI didn't have a good sense of what would feel engaging. I had to go back and simplify or restyle the output so it didn't feel repetitive or clunky. 


Tone and Voice: Where AI Fell Short 


Robin: You mentioned the tone didn’t always land. What were you hoping for, and what did it give you instead? 

Marisa: I kept reminding myself—and the tool—that these are new community managers. But the tone was often stiff or overly formal. I’d go back and revise things to sound more natural, more like how we’d talk to someone just stepping into the role. 

Robin: Did the AI get better over time, or start picking up on your changes? 

Marisa: No, it didn’t adjust. I don’t think Rise is built to learn from earlier lesson blocks in the course. It seemed like each block was treated as its own thing. So even if I rewrote the tone in one section, it wouldn’t carry that into the next. And sometimes, things started sounding repetitive—like we were saying the same thing over and over. 

Robin: That sounds frustrating, especially since this wasn't just a prototype. It had to work for people stepping into the role. 

Marisa: Right? It made me second-guess where I was saying what. I felt like I was doing a lot of extra work just to make sure things flowed and didn’t overlap. 


Interactivity: Automated Formatting Without Instructional Design 


Robin: Let's talk more about the interactive design side. Once you had your content in, what other ways did you try to get the AI to help you shape that content? 

Marisa: The interactive treatment elements that I could add often felt arbitrary. The tool could change the format, but it didn’t consider pacing, length, or user fatigue. It just didn’t think like a designer. 

Robin: So it applied interactivity, but without much logic? 

Marisa: Yeah. It would apply the format, but not with an instructional design lens. I could press the buttons and change the layout, but the AI wasn’t thinking about the experience or how people would use it. 

Robin: How did you work around that? 

Marisa: I’d duplicate the block so I could compare what it originally said with what the AI did to it after the conversion. Sometimes it would reword things or add stuff that didn’t need to be there. It would add filler content just to make it fit the block style. So I had to keep checking both versions to make sure it still said what I wanted it to say. 

Robin: So, it didn’t really understand pacing or variation? 

Marisa: Right. It could technically do the work, but it couldn’t think like a designer. It didn’t understand how the pieces should flow together. I had to go back and make it make sense. 

Robin: Did you test the AI’s ability to generate images? 

Marisa: I did, and it was… interesting. The tool lets you generate images directly into blocks, but the results were often awkward. One time it gave me a character with no head. Another time, the figures had that strange AI look: uncanny faces, extra fingers, weird shadows. 

Robin: And I imagine putting that image in front of a learner could be confusing, or even damage credibility? 

Marisa: Exactly. I wouldn’t use any of them in a real course. But I could see it being helpful for quick layout testing or prototyping. If you’re just trying to get a sense of space or alignment early in the build, it’s a decent placeholder. But for real design, I’d go with Getty or Pexels. 


Structure and Flow: Missing the Course-Level Throughline 


Robin: So even after editing for tone and interactivity, did it feel like the course held together? 

Marisa: It didn’t really feel like a course. It felt chunked out and kind of random. Each content block was like its own little island. The platform doesn’t seem to think about the whole course. It’s just responding to what’s in front of it, one topic at a time. 

Robin: So there wasn’t much cohesion between parts? 

Marisa: Right. It didn’t flow. At the end, I realized I’d need to go back and stitch it all together to make it feel like a complete experience. And since I wasn’t the subject matter expert, I kept second-guessing whether I’d already said something or repeated myself. Without that bigger structure, it was hard to tell what was landing. 


AI as Intern: Looks Real, But Lacks Smarts 


Robin: You’ve described the AI in Rise as kind of like having an assistant. What do you mean by that? 

Marisa: I keep telling people—it’s like having an intern. It doesn’t have instructional design skill, but it can do things. It can get you started. But would you send out an intern’s work without checking it? 

Robin: So the risk isn’t that it’s wrong—it’s that it looks done? 

Marisa: Exactly. That’s the tricky part. It gives you something that looks real, like something you could use. But it’s not ready. If you’re not careful, you could easily put something in front of learners that isn’t accurate, or that doesn’t work. 

Robin: That’s a little unsettling. 

Marisa: It is. It’s not bad. It’s just not smart. You still need someone to shape the experience, to make the pieces connect, to think about the flow. It’s a starting point, but it’s not a design.

 

Reflections: Helpful Tools Need Skilled Designers 


Robin: Looking back, beyond just the functionality, what would you tell another designer thinking about using the AI assistant? 

Marisa: I'd focus on tone a lot earlier; that's what took the most time to fix. I'd get the voice right first, make it feel more human, and then go back and decide which block types support that voice. 

Robin: Would you use the AI features again? 

Marisa: Yeah, I think it’s useful. But I also think you need to know what you’re doing. If someone’s brand new to instructional design, they might not realize when something feels off. Like if it sounds too formal or the structure doesn’t really flow. 

Robin: So it’s not that the tool is bad, just that it needs guidance? 

Marisa: Exactly. It's a great head start. It just doesn't know what good design looks like. You still have to shape it into something that works. 


Final Thoughts: AI Can’t Replace Human Judgment 


What Marisa's experiment makes clear is something we keep learning: AI can help us get started, but it can't make the creative, instructional, or human decisions for us. Tools like this can be incredibly useful, but only in the hands of someone who knows how to spot what's missing. It's not about resisting AI or rushing to adopt it. It's about using discernment, skill, and a deep understanding of the learner experience to turn something automated into something that actually works. If AI is like an intern, then we still need to be the creative directors: shaping, stitching, and refining until the course serves learners. It's not just about catching what AI misses. It's about designing something learners will remember and use. 
