Hey, I'm Ross Garner, and it's another year of The Good Practice Podcast. We've left the last decade behind us and said goodbye to political uncertainty and international tension. So that's the result. This year we're going to have more great guests, including return guest and author Mirjam Neelen. Hello, Mirjam.
Hello. Thanks for having me, and happy new year.
Happy new year, you're very welcome back. And back in his usual position as regular contributor is my friend and yours, Owen Ferguson. Hello, Owen.
Hello Ross. Happy new year.
It's a wonderful time of year. Now, Mirjam regularly blogs, tweets, and discusses the importance of an evidence-informed approach to learning design. And her new book, Evidence-Informed Learning Design, is out next month, co-written with Paul A. Kirschner. So, Mirjam, do you want to kick us off by explaining why you decided to write this book? What's the need for it?
No, I was just bored, and I wanted to become rich.
There's no faster route to riches than writing a book about L&D.
Yeah, I know. It's interesting, because people have asked me this question, and honestly, the main reason is that I just like writing and doing the research. But then, why a book? So I had to dig, and it's still, I think, mostly for my own professional development. But I must admit that it's a bit from an ideological perspective as well, in that I think learning is really important for so many reasons. And I just hope I can contribute a little bit to driving change in the field, because I still feel that there's a lot of work to be done. Yeah. So I confess that there's a bit of that as well.
Okay. No one who follows you on Twitter will be surprised to hear that you have strong opinions about the use of evidence in L&D, or the lack of evidence in L&D, because you do get into some quite heated debates. So I wonder: how do you think we're doing as an industry?
Well, I do think that there is more openness towards the evidence now. I feel that there's more willingness to change. I think we're now in this messy state where people try to do what I would say is the right thing: to increase the quality, or the effectiveness, of our work through using the scientific evidence. But at the same time, there's another side of it where, for example, things like neuroscience are really hyped up, and people think they're doing the right thing but are actually doing something completely nonsensical again. So I think that's a bit of a shame. There's just a bit of a mix now; I think we're in a trying-to-do-the-right-thing, or trying-to-mature, state at the moment.
I think maturity is probably the right way of putting it. It's almost like we're now ready, or at a stage of maturity, to become a true profession. All the things are there, but we're still at the very early stages of finding our way about what it means. And so there's lots of misinformation, lots of fad-following out there at the moment. Before we get right into it, I want to congratulate you, Mirjam, on an excellent book. We seem to have had a rich seam of really excellent L&D books in the last few years, and I find my list of recommended reading growing, but I am definitely going to add this one to the list. Because it's a really nice mixture of outlining the current state of play when it comes to evidence-based learning design.
And also how to practically make use of the best current evidence that we have, and how to go about assessing what's good and what's not. There is a lot in there. I learned loads just from reading it.
Oh, really? Okay. That's nice to hear.
It was really good. Thoroughly enjoyed it.
Yeah. I wish I had said something more positive about the book in my introduction rather than just getting straight into it. But thanks, Owen, for covering that. You bring up neuroscience, and I wonder how far, to an extent, we all believe what we believe and have done for years. Now we have terms like neuroscience we can attach to it. We can use these to legitimize those beliefs without necessarily understanding what they mean, but few other folk understand it either. So there's a limited risk of being challenged on what we say.
Yeah. And it sounds very scientific. Especially the neuroscientific stuff, I think, because it's all backed up with cool-looking brain images, and it really sounds scientific. I think it's risky.
Let's go a little bit into that. What are some of the problems in neuroscience in particular? Because you do take particular aim at that in the book.
Yes. Because there are very few practical implications for learning from neuroscience at the moment, except maybe things like the fact that sleep is good for consolidating memory. But otherwise the research in the neuroscience field is still very much in a biological state and not in a behavioral state.
Even the sleep thing, we didn't need neuroscience for that; there are other ways of getting to the root of that. Neuroscience is about the electrical and chemical processes that happen inside the brain. Like you say, at the moment those researchers are focused on things like what causes Parkinson's. They're focusing on wide-population, mostly medical-science stuff. There's not an awful lot of neuroscience research going into anything beyond just the very basic building blocks of learning and mind.
No, there are. Like the professor I interviewed for the book, Daniel Ansari, he focuses on how children learn math. So that's very specific, but those questions are still... And he's the one who told me that there are very few, basically no, practical applications. And he is working as a neuroscientist in the learning field. So yeah, to me, it's pretty...
What does he write on his grant submissions I wonder? Not that phrase presumably.
But that's what he explained to me: neuroscientists ask very scientific questions. So for example, well, I don't remember the exact example, because I don't even fully understand what it means. But let's say it's something about why number development develops bilaterally instead of unilaterally, or whatever. And then they're going to figure out what... They're going to do imaging, and they're going to look at what the brain does, how it's activated, and what happens. Well, okay, that's interesting. They're great questions, and they will bring us further, but it doesn't have any practical implications for learning.
Sure. This area brings us nicely to the question of medical research and the comparison with L&D. Now, Owen has regularly held up medical research as a good model for L&D to follow in its approach to becoming more professional. So, Owen, I don't want to put words in your mouth, so could you summarize your thoughts on the parallels there? And then Mirjam has some interesting views on this in the book.
Well, I'm not sure that L&D can't deliver the same evidence that you get in medicine; I'd say that it doesn't at the moment, so something gets in the way. If you have a look at how medical research works its way into actual practice, then you tend to see this kind of pattern. You've got small-scale, laboratory-condition experiments that show some kind of promise. And then, off the back of those promising results, lots of small trials are run in constrained environments to isolate effects. These are all on the research side of things. They test a range of variables. And then those positive results are tested in small-scale trials in real-world conditions, so it starts to make its way into practice. And then they're tested in larger-scale trials in real-world conditions.
And eventually new practices or medicines are adopted through the dissemination of that evidence and the endorsement of professional bodies. And there are loads of things that are part of that process that we don't have in L&D. So while we've got some broad guidelines, broad principles that you can look at and say, "Well, this provides guidance to our practice," like the benefits of spaced and retrieval practice, or the difference between novice and expert learners, you very quickly get into less certain territory, where there's lots of small-scale experiments, or research conducted in very constrained environments over very constrained variables. So I personally think the academic researchers can do better. I'd like to see more liaison between academia and practitioners. You get that naturally in medicine, because a lot of the time researchers are practitioners, and trials are run in the environments where the work is actually taking place. They tend to be run either in hospitals or through general practice, so the patients are exactly representative of the population that you're trying to help.
Whereas in the learning sphere, outside of some well-replicated, really high-quality evidence, a lot of the time the initial research is conducted with just populations of students. It's the problem of psychological research writ large. And so I personally think that we could see more coming from more liaison between academic researchers and practitioners.
So do you think it's only a matter of time, really? Or, I don't know, there's not the same urgency attached to corporate L&D research as there is in medicine. No one's going to die from not getting an optimized learning experience. You take it a little bit further than that, and argue in the book that L&D can't deliver the same quality of evidence as clinical practice.
Well, I just read an article about this as well, so I kind of want to nuance that a little bit. In the medical field there's also a lot of complexity sometimes, because they're also dealing with human beings, and sometimes with environmental factors, et cetera. Let me just take a step back, because in the book we say that in the learning field we are dealing with learning environments where we have many different, often difficult-to-control variables that also interact with each other. I think there are instances in medical research where that happens as well, or where that's the case as well. But I do think in learning we don't have enough knowledge yet, compared with, say, what we know about anatomy, for example. We're stuck with the brain. So that's where complexity already kicks in for the learning side of things. And then we're dealing with all the environmental factors.
So I definitely think we can do better, but I still think that learning is messier than the medical world, although they have other complexities to deal with; they have to work with different governments, or whatever, to get everybody on the same page. So, this article I was reading was by, I don't know if I'm pronouncing his name right, Robert Slavin. And he's saying that the main reason why the learning sciences, or educational sciences, are behind compared to medical science is that the people who have the power are just not as interested; they just don't think it's as important as curing diseases. So I think that is a big problem; it's just not at the top of the list.
Yeah. It's maybe hard to argue with that position as well. Again, no one's going to die. But I think another problem is that...
It's a bit of a short term vision though.
Yeah. Well, maybe, but at Good Practice we have products and we monitor how people use them. And sometimes we publish some stuff, but we don't publish everything; we don't want to give away everything that we do to our competitors. And it's also the case that if you do any kind of learning intervention within a private company, often they're not willing to share it; there's no one there that's tasked with publishing it. And even if they work with a researcher, they may not want to go into enormous detail, or publicize what it is that they're doing, because they're doing it in order to secure, to some extent, a competitive advantage. And so there's a bias against publishing.
Yeah. I think in work-based learning that is a big problem: the whole set of commercial interests that are part of work-based learning. And even in schools that's starting as well, with all the educational technology companies that are selling stuff. I don't know, now I'm opening a can of worms, because you have the pharmaceutical industry as well. So I don't know.
So let's take that as a starting point for the rest of the discussion. I think it's safe to say that research into how people learn doesn't have the same maturity as medical research. That being the case, how should we approach the research that does exist, and how else can we take an evidence-led approach? Maybe I'll split the question in two parts, so it's a bit easier. First: how should we approach the research that already exists?
I would say read it, or at least read the research-to-practice work that's out there. And then, based on your experience and the context you work in, really think about how it applies to your environment and your learning audience. And experiment, and evaluate, and iterate, and scale. I don't know, it sounds simple. I know it's not, but I think that's the only way we can get better.
Yeah. A substantial proportion of Mirjam's book covers a lot of this stuff really well. Basically, you need to assess the credibility of the evidence first of all. So you need to look at things like: is this a small-scale study in constrained conditions that hasn't been replicated yet? That should go in the "interesting, but not really ready for prime time" pile. Is it well-replicated, well-understood practice that has been tested multiple times with multiple participants over a long period of time? Well, that should probably form part of your regular practice. But this is true of all research: it's never going to perfectly fit into your own particular context. So you do need to keep abreast of what's out there, in particular the stuff that's well adopted, make a judgment call about what you decide to put into your learning design, and then monitor and evaluate it properly to make sure that you're doing things that actually work within your own particular context.
You do understand that this was intentional. That I skipped that step so that Owen could explain it because he can do so much better than I can.
Now I'm wondering, if we are to professionalize, how far there's an onus on us to report on what we're doing at work. It's all very well to read the research, try something in the workplace, measure how it goes, and then scale it. But should we tell people about what we're doing?
I think that's an excellent idea.
Park the commercial considerations for a while. What do you think about that? You think yes?
I think yes. Yes. Also, I would love to partner with universities and run experiments in the workplace. And I'm sure that happens as well. We interviewed one researcher who researches self-directed learning, which is, from that end, a bit more "easy", in quotes, to research in an authentic context. I think it would be brilliant if we were able to do that more, and if organizations would be willing to invest in that as well, which I'm sure some of them do.
Yeah. I think to solve the problem of being able to share high-quality, large-scale, experimental, trial-based, or practice-based research, we are going to need to see more collaboration between the academic circle and the practitioner circle. There's no getting away from that. And I guess I'd put that there as a challenge to professional bodies: it's about how they can bring those two communities together. I suspect the academic community is driven by a different set of needs from the practitioner community. And I think that's one thing that you don't see as much on the medical side of things, because, as I said, researchers are practitioners, so they are concerned about the same things.
Yeah. I think, from my experience of doing the masters, researchers are largely concerned with, well, predominantly higher education, which doesn't necessarily translate. So they are focusing on a very different context to the one that I was writing about and studying for the past three years, to the point where it was difficult to find literature for a lot of the areas that I was interested in. And they also operate in completely different spheres, in the sense that I'm part of a learning technologies meetup group. It's not affiliated with the organization Learning Technologies; we just called it the Learning Technology Research Group. About half the group comes from the University of the Highlands and Islands, because I'm based in the north of Scotland. And they all go to the Association for Learning Technology Conference each year, which I hadn't heard of. And the side that I was part of, the corporate side, all enjoyed the Learning Technologies Conference in London, which is a corporate L&D conference.
And they'd never heard of that. And we both considered these two conferences to be Europe's biggest learning conference, because I think that's the strapline for both of them, but there was no overlap at all in what we went to. So we're having completely different conversations, even when we talk within our reasonably social, often pub-based meetup group. We use different language. So there is a real difficulty here.
I also wonder if organizations are guilty, in the way that they're not necessarily interested in the learning; they're interested in the outcomes. But I don't think they even really think about how important it is that we understand how people learn effectively and efficiently, to get to those outcomes as fast and effectively as possible.
Well, there's a big push now that you don't need to learn at all, or that learning and development shouldn't be concerned with anyone learning anything. We should be focused on performance, not people.
Yeah. Just on performance. So if even in L&D we start to promote that perspective, then who's going to invest in collaboration between academia and the workplace to figure out how people learn best?
Sure. So, parking academic evidence to one side for a minute, where else should we be looking for evidence to inform our practice?
Apart from science?
Oh, I'd say learn and research: talk to people, observe people, try to see how they do their jobs. Get insight into the processes they're dealing with, the systems they're working with, all that stuff. The systems themselves, like data from systems. I'm guilty of never having done that. Not because I don't want to; it's just so complicated where I work, with so many different systems and so many different types of data, that I don't even know where to start. But I think it is important to leverage the stuff that's just there, used in the workflow.
Yeah. I think the important thing about that evidence is how rigorously you go about collecting and analyzing it. It's all too easy to go in with a set of assumptions and look for data to back yourself up, rather than create a set of hypotheses and find the data to falsify them or otherwise. And that means taking the approach from the learning science research that's out there and applying the same kind of rigor. It doesn't have to be exactly the same, but at least adopt a lot of those approaches for the data that you collect internally.
Or at least acknowledge, if you're not able to use the same rigor, that you aren't, instead of presenting it as factual. Otherwise everybody's going to jump on it and do exactly the same, because it sounds so brilliant. I think we need to be really careful with that as well. I always sound so negative.
It's okay. I'm about to be negative too.
Oh, okay. We just need to be realistic. I'm a realist. Sorry.
I understand it's important to base what you're doing on your experience and your own research, but you can see how that very quickly descends into problems, and you list a lot of those problems in the book. We probably hear them all the time. So it will be things like, "Was this course a success? Let's ask everyone how much they learned from it. Oh, everyone says that they learned loads. Well, we've had a massive success." So how do you counter that kind of attitude? And maybe, what's the problem with that attitude as well?
Well, I think the biggest problem with that attitude is that you're pretending that you've measured something that you haven't measured. I think it's okay to say, "We want to know how people feel, and how they've experienced this." That's totally fine in itself. You want to know how people experienced something that you've designed, but it doesn't mean you...
If they hated it, maybe that's something to take on board
Yeah. Or then at least you have a pointer to figure out why, and then you can make an informed decision around: "Is it okay that they hate it, because it's just difficult and they're not used to doing something difficult?" I don't know, I'm just making this up. But if you are able to explore the why, then I think all that's fine. It's just that you haven't measured learning. So I think that's the biggest problem. What was the second part of your question?
I think another problem is if you're not taking an unbiased look at it, if you're not figuring out whether the thing actually worked or not, but you're just looking at how to prove that you've been successful, then you never get better. You never actually improve, either in your own practice or in the profession as a whole. And I think one of the things that limits progress is not wanting, or not being able, to take a genuinely critical look at something that you've done. If you've never conducted an evaluation, or looked at something that you've implemented, and come to the conclusion that it was an utter waste of time, or that it actually had a negative impact on what it was that you were trying to change, then you've probably either not taken any risks whatsoever or, alternatively, you're not measuring right.
Sure. And it's easy to pitch it as if we do something, and then it's either a success or it's not, as if that's a binary thing. Rather than: it was maybe successful against some measures, less successful against others. And even the ones where it was successful, maybe we could learn from that and do better next time, rather than give ourselves a pat on the back.
I think that's an excellent point. Like the distinction between measuring for success versus measuring for improvement. I think that whole mindset, I can't believe I'm using the word mindset, but I think it is a completely different way of thinking about your approach.
I don't know, maybe there's a book in mindset.
That's part of the problem.
It's such a shame that she's become such a target for jokes, Carol Dweck.
It is a shame, yes. Because I think she's done great work
Yeah, yeah, yeah, yeah. A victim of being too successful
No, I just think the work she's done is great. It's just what happened with it afterwards and, again, all the practical implications that people took out of it; that's where the problem lies. There was a joke on Twitter that I thought was funny, where they said the magic variable to prove that growth mindset works is Carol Dweck, which is of course...
And there we go again. It's very difficult, I think, for practitioners to be able to sift what's high quality and can be relied on from what's high profile and has a lot written about it. Mindset is a perfect example of something where the claims appear to be vastly overblown, but it got jumped on relatively quickly. There wasn't an awful lot of replication and follow-up research at the point where it started to buzz up in the popular mindset. How can learning practitioners cut through that, other than just by reading your excellent book?
Well, that's the first step. Of course
Yes. Of course that is.
Well, I think it's going to take a lot of effort, and I don't know if everybody's willing to invest that, because for some people their job is their job, and they're not necessarily passionate, or sufficiently passionate, to spend free time on it as well, because it takes effort. You do need to sift through, and you do need to read or watch with a critical eye. I think what we use in the book, like the steps that Willingham uses, those are fairly easy to get familiar with and to start using. And I don't think it's a huge time investment to read or watch through that lens. I think that could be a good starting point: just start reading with a different lens, and don't jump on things because they sound attractive, or sexy, or whatever.
Right. So we are almost out of time, and I think what I'll do is put some show notes together: first of all, where you can buy Mirjam's book, which is out on February 3rd.
Yep. So very soon. Also the Willingham research that Mirjam just mentioned; I think it'd be useful to have that in the show notes. And I'm conscious that that whole Carol Dweck growth mindset conversation assumed a huge amount of knowledge for our audience. So if anyone was completely lost by what we were talking about, and why we were smirking slightly cruelly, then I'll put some crib notes in the show notes as well. You think not, Owen? You think we weren't cruel there?
No, I don't think so
No, because I literally don't think this is about Carol Dweck. Again, I just want to repeat it: I think she's done really good work. She's a great scientist, I think.
Yes, we have covered it in What I Learned This Week in a number of episodes. And there is a good summary article, I think, that we can post in there. It kind of covers where we are just now with all the growth mindset stuff.
Yeah. So find that in the show notes, in whatever mechanism you're using to listen to this podcast. All right, let's move on to our regular feature, What I Learned This Week, where we share something we picked up over the past seven days. Owen, do you want to go for it?
Sure. So, for reasons that I won't go into, I've needed to dig into the history of one of our products in the last week, specifically looking at the various design iterations of our toolkit product over the last 15 years or so. There's been a bunch of us who've worked at Good Practice for quite some time going through all of our emails and files, looking for screen grabs of various designs that have been used over the years. And what's been amazing to me is the number of designs that all of us had completely forgotten about, where we've pulled out a screenshot and thought, "Oh yeah, I forgot about that version."
So when we first set off, we thought there were like four big design changes over that period of time. And then we've gone back and looked, and realized that there were about seven in actual fact. And so it just kind of reminded me of the fragility of the human mind, how you quite often reconstruct your memories on the fly, purely based on what you want to remember rather than what you actually do remember. Because one of those designs in particular, and the reason it didn't last very long, was because it could have been better.
Yeah. So I've been here for seven years, and I think I saw the first design of the toolkit. It looked very much like Windows Explorer.
Yes, that's right. Yeah, yeah, yeah. But that wasn't seven years ago. That was...
Oh yeah, sure. That was the mid '90s or something. Early 2000s.
Early 2000s. Yes. Yes. It wouldn't win any design awards now. Let's put it that way.
Mirjam, what have you learned this week
Well, mine is very trivial, I'm afraid. I learned that cats love bleach. And I actually don't understand how I didn't know this, because I've had cats all my life. But the reason is, my cats decided to pee on my children's Lego, and we have lots of it. So I had to clean all the Lego, which took me about two hours. I decided to use bleach, because I really wanted to clean it thoroughly. And after that I spread it all out on towels on the floor so it could dry. And then suddenly my cats started to behave really weirdly. They started to roll around in the Lego, and started to chew on the Lego, and they were just all over it. So then I did a Google search and found out that cats love bleach and chlorine, which is of course really poisonous.
That sounds like an evolutionary misstep.
Yes. Well, that happens sometimes. I will improve, Owen.
No, I mean for the cats, the cats...
Oh, the evolutionary misstep. I thought you meant for me.
No, I'm just thinking of the cats: why would cats like something that is so deadly to them? Oh, that's fascinating. I wonder how that came about.
That is a good point. Yes.
I really enjoyed Mirjam's assumption. You were criticizing the way she'd evolved.
I take everything personally.
Well, I spent the past few weeks reading, watching films and so on, because it was Christmas and the holidays and stuff. And so I have some recommendations. First, I've been reading Stephen King's novel 11/22/63, which I got for Christmas. The thing is enormous, but it's so good I've almost finished it already. It's about a school teacher who goes back in time to prevent the assassination of JFK. Sounds crazy, but King is a genius at making this stuff sound probable, so top recommendation there.
Second, I went to see the film Little Women in the cinema, directed by Greta Gerwig. It was one of the most age-diverse audiences I've ever watched a film with, and one of the least gender-diverse. I think I was one of three men there, and the screening was sold out. It was a huge screen; I thought no one would be there, but it was packed. Loved it, highly recommend. And I worry that men might be missing out on it. So, men, go and see Little Women. And finally, I listened to the nine-episode podcast series Dolly Parton's America. And I am now convinced that you either love Dolly Parton, or you have not yet listened to Dolly Parton's America.
It's on all of the usual podcast feeds and, as such, it's totally free. She comes across incredibly well throughout, and there are lots of great stories in there. It's brilliant. So links to all of those in the show notes as well. And that's it, that's all from us. If you'd like to get in touch with us about anything we've said on the show, you can tweet me @RossGarnerGP, you can tweet Owen...
And you can tweet Mirjam.
@MirjamN. But it's spelled M-I-R-J-A-M-N.
And I'll put links to that in the show notes as well, along with links to Mirjam's blog and her book, which is available from Kogan Page and Amazon, I assume.
You can also tweet @GoodPractice or @GoodPracticeAUS. You can find out more about Good Practice at goodpractice.com, where you'll find our back catalog of podcasts and details of our award-winning performance support toolkit, now in a much more effective iteration than it was back in the early 2000s. And finally, if you want to make someone's 2020 extra special, why not tell them about this show? You can leave a review on iTunes, share an insight at a meeting, or steal your spouse's phone and subscribe them on the sly. They will be thrilled that you did. Next week we're chatting to Ed Monk from The Learning and Performance Institute. Until then, thank you, Mirjam.
Thanks very much guys.
Bye for now.