► In today’s episode:
- The current state of mental health care, highlighting issues of accessibility
- Opportunities for AI to enhance mental health services and improve overall wellness outcomes
- Mental health apps revolutionizing accessibility and personalized treatment options for individuals
- The significance of movements paving the way for data-driven mental health care and challenging traditional methods of diagnosis
- The nuances and complexities of mental health diagnostics
- The fluid nature of our understanding and classification of mental disorders
Transcription
Mark: So I think actually the distance, and the anonymity the digital space can make you feel you have, can sometimes even ease the process of finding help or opening up about things. I really do believe that generative AI will be able to take that role, help people go through it, and basically be a kind of healing mechanism. When asking if it makes sense to use new technology, you always have to look at the status quo and the actual quality of treatment currently, and it's very, very low. And at the same time, probably in 20 years, ADHD as such will not be a thing anymore.
Łukasz: Today I am pleased to welcome Mark Goering, a seasoned entrepreneur and expert in mental health technologies. Mark has co-founded MoodPath, a widely acclaimed mental health application with over 4 million downloads. Following the acquisition of MoodPath by Schoen Clinic Group in 2019, Mark played a key role in integrating digital mental health solutions into clinical settings. Recently, Mark transitioned from the role of Chief Product Officer at MindDoc to refocus on new ventures. We're super happy to have him as a guest on our show. Mark, welcome.
M: Good to be here.
Ł: Awesome. Given your background in psychotherapy, I'm super curious how someone like yourself, who is a psychotherapist, certified, and everything, becomes a product person for so many startups in their career. Can you tell us a little bit more about this?
M: Sure, both fields were always present in my life and my ambitions. I had a double focus at university, both on economic and organizational psychology and on classic clinical psychology, and I actually felt too junior, too young, to jump right into clinical psychology as a 24-year-old. So for the first couple of years of my career, I focused completely on the startup world. I worked on recruiting and HR topics for a fast-growing company builder, then focused for a couple of years on the clinical part, and then I kind of brought both pathways together by founding a mental health startup. That way I was able to work in product and in the whole startup world while still being focused on mental health.
Ł: Awesome. And through your journey, you've built so many different digital tools. I'm curious how you feel they transform access to mental health support, particularly in terms of scalability and personalization, or whether they still do far too little.
M: By far too little. The potential isn't being used or harnessed the way it should or could be. But I think the technology, at least the first product I built, MoodPath, makes a small contribution: it gives people access to the whole question of how their current state would be labeled, or what a psychologist would say about it.
This is a question that affects so many people, because mental health problems of any sort are a huge problem for a massive number of people: anywhere between 20 and even up to 50% of people go through some kind of mental health condition during their life. Some form of pre-diagnosis and information, just understanding what, based on academic research and literature, is considered an illness and what is considered normal human experience, answering a bunch of questions and letting those answers run through what academia and science have established as normal and beyond normal, I think has huge potential.
And beyond that—and I don't think it's being used yet—there is a huge potential to use what is produced from that and basically feed it into the standardized processes of health care.
Ł: I got it, but I don't see massive adoption of technologies like this right now. What would you say are the main obstacles to mass adoption currently, in the EU or worldwide?
M: I would say on the user side, there is at least proven interest in massive adoption. There's a bunch of tools, a bunch of self-help, and if you just look at the keyword search volume online, on Google and so on, there is massive adoption of digital tools to gain information about mental health. A lot of the time people then land in one of the massive number of apps out there, be it actual clinically focused apps, which are probably still underrepresented, or the more in-between apps in meditation or other self-help areas. So there is a lot of adoption among people who are affected. But what I think has close to zero adoption is what I was saying in the last part: the intertwining, actually using the information gathered in real-life health care. Say you use an app, or you find some website that gives you information, and you can actually enter information, and that information is run through and gives you some kind of result, some kind of indication. I don't know of basically any system yet that will actually use that information: generate a profile and then tell you some pathway, certified and recognized by public health care and insurance systems, where, based on your answers, you can now go to this doctor or this clinic, they'll be able to see your results, and based on that you will receive some kind of treatment. This kind of journey is the missing part, from my perspective.
Ł: This sounds amazing on the one hand, but a little bit scary on the privacy end. Is this a concern for the scalability, legal approval, and widespread adoption of such systems, or is it just my concern?
M: I don't really think it is an objective reason that should limit scalability. I mean, it's obviously a risk, and it needs to be addressed. There need to be state-of-the-art IT security and data privacy measures in place, as with almost any field that handles sensitive data. But I don't see any objective reason why it should per se limit scalability, because there are solutions. With modern encryption we have, from my perspective at least, all the tools there, and with GDPR we also have the legal basis to actually enforce high standards in IT security and data privacy.
Ł: But I mean it even from the perspective of the feel for an end customer, user, or patient. We're banking online today; that's pretty normal, I'd say, in the Western world, and we goof around on social media. But this is different, right? The narrative of opening up, confessing, really deep-diving into your deepest thoughts and past with someone online, or sharing with some sort of anonymous system where it's just a bunch of questions and answers. Is that not a natural barrier to interacting and adopting?
M: Yeah, it's interesting you say so. I'm sure it is some form of barrier, but in my experience building these systems, I have not seen it being a significant barrier for people who actually need help. In German we have an expression, roughly "the pressure of pain", meaning how much you're actually suffering under your symptoms, and I think you get to a breaking point pretty quickly. With any doctor, there is that barrier. You don't generally like it; it's not a pleasurable experience to open up and share information, and especially in the digital world there's a bit of anxiety around it. But in my experience, as soon as the pressure of suffering gets to a certain point, and that happens quite quickly, you're ready, in the state of mind to seek help. I don't think the aspect of it being digital or non-digital is much of a game changer. Actually, quite the opposite: I once read a study showing that people affected by post-traumatic stress symptoms and disorder were more likely to open up, and not hold back information, in a digital setting compared to a face-to-face setting. So I think the distance, and the anonymity the digital space can at least make you feel you have, can sometimes even ease the process of finding help or opening up about things.
Ł: Yeah, I find it fascinating that some very famous people will openly speak up and present themselves as vulnerable to the wider public, talking about their experiences of trauma and other past experiences, and describing their healing process. I fully believe this can be done. I'm just wondering if it's for everyone, given the current state of technology and, let's say, public awareness of how to use it, or even external motivational factors such as pain, as you described. Where do you see the next level of innovation in this field in the coming years?
M: Yeah, like probably any tech-savvy person on this planet, I'm excited by the level of quality that large language models, with ChatGPT as the trailblazer, have opened up. I see massive potential in it. I think it is a true game-changer, a leap of innovation, and I think anyone who deals with these things sees it the same way. I also see that there are often predecessors, almost like prophets of new innovations: people have the same idea and make the first run, then fail miserably, but the idea makes sense and is there. The implementation just wasn't good enough, and then after some cycles of innovation you reach that certain level. I think AI, and specifically generative language models, AI using semantics that you can actually talk to in a chat situation, has that same potential. And this goes back to the 60s.
I think there was a research project called ELIZA that's always cited in the scientific literature, and there are many, many cases of this idea: hey, couldn't we mimic what happens in psychotherapy, where there's a person who is affected and has problems and talks, and another person who listens, asks questions, softly leads, helps the person come to realizations themselves, and just provides a forum and allows certain emotions to be there? Couldn't we automate that process? We know, or at least we're pretty sure based on the science, that a lot of what makes psychotherapy effective, more helpful than doing nothing, or even more helpful than taking certain drugs, are common factors, more general factors. It's never a very specific technique.
As you see, there are many different schools of psychotherapy. We can be pretty sure that when you abstract to what is actually healing, what the mechanism is, what happens in the brain that leads to someone actually being better than before, it's not one very narrow therapy, whether systemic or psychodynamic or cognitive or whatever. It's more general factors. One of those general factors, I believe for sure, is confrontation: getting your brain used to certain thoughts, emotions, and states of mind that you usually find aversive and want to push away. If, through the process of psychotherapy, you can get your brain used to them, reframe them, see them in a different way, or at least be able to deal with some things, then most likely those aversive thoughts will have less power over you. And I really do believe that generative AI, a specifically trained language model, will be able to take that role and help people go through that and basically be a kind of healing mechanism in and of itself.
Ł: Just to reframe how I understood it: you're saying there's a specific model you train for the purpose of psychotherapy. Would that be supervised session by session? Or, I guess I'm trying to ask, are there edge cases in which it could worsen the state of the patient? The hallucination problem or something.
M: Well, to be honest, I went through the entire training to become a psychotherapist, and some of the supervisors I had there I definitely would not have gone to as a patient. It's also clear from the research that in a relatively large share of cases psychotherapy can have negative effects and symptoms can get worse. And oftentimes, as I was describing with the mechanism of confrontation, for a patient it's almost like a dramatic arc: at first, in psychotherapy, things actually get worse. It feels worse because you're confronting. It's work, it's hard, it's not pleasant by any means. But, as so often in life, things need to get a little worse to get better.
Obviously there are risks of that process stopping too early or not working well. As with any medical treatment, there's always some risk of things getting worse, of adverse effects, and those risks are there. That's why it needs research, it needs certification, and the regulatory bodies that certify medical software somehow need to find solutions to assess risks based on AI. Assessing risks from an ever-changing, continuously updating AI model, one that isn't even really foreseeable for the makers of the model, is completely different from a traditional software system. But these things are being addressed and worked on; I think the FDA in the States is much further along than many other countries and already has some frameworks for assessing AI-based products. So yes, it's complex, but in the overall picture I am most hopeful about the new technological developments in AI.
Ł: Would you agree it's comparable to the self-driving car metaphor? It's scary that the car can drive itself, but in practice we know that, statistically, the chances of an accident caused by a human driver are something like ten times higher than if the computer does it; you're less likely to be in an accident caused by AI than by another human being.
M: Absolutely, I see it the same way. I think a lot of times people think that new solutions should somehow be perfect, and they've gotten used to the fact that there are outstanding risks in the solutions they're using right now. They're just not aware of them, because they're rare and they're managed. But if you look at reality, there are never perfect solutions out there, and the assessment of value is always: what new advancements and effectiveness can it bring, and what are the risks? Then you have to weigh those against each other. So yes, I think it's comparable with self-driving car technology and almost any new technology.
Ł: I consider doctors people of trust and authority. I always just assume there's going to be a 100% success rate for whatever I'm feeling: you know, my skin has a rash, I go to the dermatologist and they resolve it for me, right, 100% success rate. But as a reminder, it's not like that. You said yourself that not every practitioner in a given area might be the best fit, even within their area, for a specific patient type or condition type.
M: Just look at psychiatry and the most recent studies on the effectiveness of psychopharmacology, specifically antidepressants. The fact of the matter is that the scientific community is, to put it mildly, not sure, if not skeptical, whether SSRIs, the main class of antidepressants, have any effect beyond placebo. And yet it is the main tool psychiatrists use in clinical treatment; something like 80% of treatments are by doctors who don't engage in any form of psychotherapy but only prescribe antidepressants. At the same time, science shows us that we have no idea if there is even a causal relation between the serotonin system and depression. If you just look at that one example, and there are many others from other medical areas as well, medicine does the best it can, but oftentimes it's not much better than a placebo. So when asking if it makes sense to use new technology, you always have to look at the status quo and what the actual quality of treatment currently is, and it's very, very low.
Ł: Yeah, it's surprising how much of that is still research and learning. Just a couple of months ago, maybe last year sometime, I read that there is now new research that shows a connection between gut bacteria and depression. Don't quote me on that; I'm just rephrasing how I remember it.
M: I remember seeing some research on that in that field, and from my clinical, let's say, experience and my own personal experience, I believe that there is a huge connection between diet and gut health and mental health.
Just from personal experience, I know the situation where I lie in bed and start feeling uneasy and kind of anxious, and then I start getting into these anxious thoughts about what I didn't get finished and what is not going to work out.
I think all of us know that state of mind, and I've figured out over the last few years that oftentimes, when I'm in that state, especially when I want to fall asleep, it's connected to something I shouldn't have eaten. My gut doesn't handle milk and dairy products that well, I'm a bit lactose intolerant, and I have allergies to some pollens, with cross-reactions to certain foods at certain times of the year. I've personally noticed very strongly that these things are linked, and that your brain often starts cognitively trying to make sense of things based on the state of mind you're in, a state whose cause may lie in something completely different than you think. I don't have much more than anecdotal experience on that, but mental health is such a complex topic, and I think it's only fair to acknowledge that it is still very, very poorly understood. It's more a phenomenon we're trying to grasp than something we fully understand and just need to do the right things about.
Ł: Given that, given that this is still an area of rapid development and R&D by the scientific and medical communities, do you think we could actually deploy ready-trained models? Because they could be built on wrong assumptions, right? You mentioned the placebo effect of some drugs. What is stopping us, or anyone, for whatever reason, from deploying a model that would just tell everyone, "take this pill and you're going to feel better", maybe even with some probability of being right, if it's winter and it's just, you know, prescribing vitamin D?
M: Well, first of all, I'm not sure we're really in a phase of rapid development or knowledge improvement. In terms of biology and understanding the brain, for sure; there is massive, quick progress in understanding basic molecular and biological processes. But in terms of how much that actually contributes to understanding mental health phenomena, I don't think we're getting much closer. Reducing an entire syndrome, let's say depression, down to biological processes is a massive gap. Even if you're inching your way forward by understanding basic processes in the brain, you might still be miles away from putting that together to actually understand and reproduce the phenomenon of human experience.
So, looking at that, I view psychotherapy and mental health not from a scientific perspective but more from a human perspective. I do believe we are pretty sure, and there's good empirical evidence, that the general factors of psychotherapy are actually helpful: connecting to a person, giving them the space to bring out and manifest emotional states and thoughts in words, externalizing them, confronting them, practicing social situations within a conversation. All these things are effective; they're the general factors of psychotherapy that make people feel better and cope better. So if you can put that into technology, run it through tests and scientific studies, and show that people using your product show improvements, then I'm all for it going out there. So I think about it quite pragmatically.
Ł: Totally get it. Mark, assuming that someone has some condition, is it always worthwhile to know? I mean, suppose someone is well integrated, a participant in society today: they have a family, they're happy at work, they have hobbies and everything society would normally consider the norm. But then they get diagnosed with ASD or some other form of being on a spectrum, or outside the, well, psycho-normative, or whatever the term is, the label, sorry. Do you think there is always value in knowing this for the individual, or are there cases where there's no value?
M: I would clearly say that oftentimes the labels given don't bring any value and, quite the opposite, often even bring purely negative effects. I'm quite critical of diagnoses in the personality disorder spectrum. I personally follow, am convinced by, a scientific movement right now that is quite critical of the current classification system of mental health and takes a quite data-driven approach to diagnosing mental disorders, and from a data perspective they basically reduce away a lot of these personality disorders, like borderline, for example, or impulsive or narcissistic. I've seen exactly these labeling effects we discussed come up with people based on very, how should I call it, sluggishly or carelessly given diagnoses, especially by medical doctors who barely had the time to actually understand the psychology of the person they're dealing with, but who, based on a couple of interactions in a hospital, then label an entire person with a certain personality disorder. But I guess your question was going in a bit of a different direction.
So I think misdiagnoses, or even diagnoses that are true, so to say, oftentimes don't bring any value unless you can actually connect them with goals and some kind of process to change something within the person. And even in the wider medical field, some diagnoses, say cancer diagnoses among the very elderly, just don't bring any value anymore because there's no treatment, and they only have negative effects. I think there might be similar things in psychology: where there's no real treatment, no medication that will actually address the phenomenon you're going through, and no fitting therapy, probably a lot of people would be better off not being diagnosed at all. That said, I've also seen it many, many times, both with people affected and with their loved ones, that a label brings relief. Labeling has two sides. It can be functional, it can be helpful; in German, you say "to give the child a name".
You can feel already from that phrase that it's about making things definitive. And there's a connection to what we were discussing at the beginning: the assumption that the doctor, my healer, this kind of archetype, knows what he or she is doing, is an expert, and can tell me exactly what it is, and it has a name. It is an illusion that can be helpful. I've actually been in a strange conflict there: even though I know that a lot of these things are much more unclear, much more complex, and not as well understood, when I've been in a situation with a patient who needs, or feels the need for, clarity, I've found myself exaggerating the level of certainty I have about a diagnosis, the prognosis of someone getting better, and the chances of therapy, to give the patient that good feeling of "I am in professional hands, they know what they're doing".
"Now I have finally left this phase of being in the dark and not knowing anything." It's kind of like going into enlightenment, and that is a good feeling, and I think it can be used to generate energy and channel it into the right things, into motivation. So yeah, all those things are positive. And at the same time, probably in 20 years, ADHD as such will not be a thing anymore. It'll probably be divided into four or five different things. It's always moving. So I would say, with a grain of salt: if it's helpful, then by all means, use it.
Ł: Could we get back to the topic where you were describing some of the movements and models for data-driven approaches? I'd be really curious to learn more about how they enhance mental health outcomes for patients and doctors.
M: There's an interesting connection there, at least, to technology and what I was doing with my first company, MoodPath. One approach, or model, is called HiTOP, H-I-T-O-P, which stands for Hierarchical Taxonomy of Psychopathology. It basically builds a factor-analysis-style model where you take a mass of data on the occurrence of symptoms within patients. All these symptoms we know people can have, from not being able to concentrate to not having any appetite to not being able to sleep: how often, on average, do they occur together? So it's basically correlations, and over masses of data they generate factors of psychopathology. And if you look at what comes out, you see that the same models appear across cultures and across different data sets. So we're finding a robust structure there.
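[Editor's note] To make the correlational idea behind HiTOP concrete, here is a minimal, purely illustrative sketch in plain Python. The symptom names, ratings, and sample size are all hypothetical; real HiTOP work uses large clinical data sets and proper factor analysis, not this toy pairwise-correlation view.

```python
# Toy illustration of the HiTOP starting point: symptoms that rise and fall
# together across people form a "factor". We compute pairwise Pearson
# correlations over hypothetical per-person symptom ratings.

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical 0-3 severity ratings for six people on four symptoms.
ratings = {
    "low_mood":     [3, 2, 3, 0, 1, 0],
    "no_appetite":  [2, 3, 3, 0, 0, 1],
    "worry":        [0, 1, 0, 3, 2, 3],
    "restlessness": [1, 0, 0, 2, 3, 3],
}

symptoms = list(ratings)
corr = {(a, b): pearson(ratings[a], ratings[b])
        for a in symptoms for b in symptoms if a < b}

# Highly correlated pairs cluster into one factor (here, a mood cluster
# vs. an anxious-arousal cluster emerges from the made-up data).
for (a, b), r in sorted(corr.items(), key=lambda kv: -kv[1]):
    print(f"{a:>12} ~ {b:<12} r = {r:+.2f}")
```

Running this on the toy data shows the mood symptoms correlating strongly with each other and negatively with the anxiety symptoms, which is the kind of empirical clustering the model then formalizes into hierarchical factors.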
If you compare that to what we actually use in practice, the International Classification of Diseases (ICD), and its American equivalent, the DSM, the way those work is that it's generally old white men who have researched a certain field for a very long time and have somehow connected their ego and identity to a certain disease.
They have their own theories, ideas, conferences, books, and so on, and they tend to stay focused on one type of phenomenon.
Then that's what actually leads and drives diagnostics and treatments, and honestly, the two don't fit well together. A lot of things that get a hyper-focus in the traditional system are, in the data-driven model, clumped together into one big phenomenon; a lot of anxiety, melancholy, depression, and mood disorders are actually so intertwined that, based on the data-driven approach, you can't really divide them up very well. So what we did with our MoodPath app, which was essentially a tracking app with a long questionnaire, was get in contact with some of the leading scientists within the consortium driving this development, and we changed our entire system to reflect the questioning structure of the HiTOP model, so that users could get a profile of their own symptom occurrence based on this data-driven approach. That was actually very fun: being part of a real-world implementation of a more scientific approach to diagnostics.
Ł: Yeah, I've had the pleasure of playing with this, and I must say I really enjoyed the whole gamification aspect of it as well. It's interesting to see there are a lot of different vendors and ideas around this now. I believe even iOS has it natively now; I don't remember what it's called, the Health app in iOS, where I believe you can fill in a diary of how you felt.
M: Yeah, this came new with, I think, iOS 16 or so.
Ł: Exactly. It's brand new, maybe a year old, maybe two, I don't know, but it's not as engaging. I'm completely unmotivated to fill it in, and it's not reminding me to do it. You guys did a far better job at this. So, how much further do you feel we can push this from a product perspective, user-experience- or communication-wise, or maybe even with the AR/VR that is coming? I don't know if that makes sense for psychology and psychiatry.
M: For AR/VR, I see more potential in enhancing the whole confrontation part, so more in the actual therapy; I'll get to that point afterward. But to your original question about tracking and self-diagnosis: you can generally divide between diagnosis, self-diagnosis, recognition of disorders, and then treatment of disorders. Within diagnostics, I think supporting these diagnostic systems with sensor data from your smartphone has the most potential from a technological perspective, also for personalizing and asking the right questions. If we can see, based on movement patterns, how often someone is actually on their phone, or what we can infer about their sleeping pattern, we can ask the right questions at the right time, in the right moment, even in the right spot geographically: basically asking, "you're in this situation right now, we see you're staying at work longer than usual, what is your state of mind right now?" You want to feel that the diagnostic system is asking you questions based on prior information, maybe even based on a theory. It's like in therapy: I mention a fight with my girlfriend or my wife or whatever, and the therapist might ask how often that happened in the last three months, or whether it happened in previous relationships as well. From that question I can tell the therapist is thinking about something and wants to get somewhere.
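[Editor's note] The sensor-triggered, context-aware questioning Mark describes could, under very simplified assumptions, be sketched as a baseline-deviation check. Everything here, the function name, the two-sigma threshold, the hours data, is hypothetical; a real system would use richer signals and validated instruments.

```python
# Hypothetical heuristic: surface a check-in question only when today's
# sensor reading (time at work, in hours) deviates strongly from the
# user's own historical baseline.

from statistics import mean, stdev

def should_check_in(history_hours, today_hours, z_threshold=2.0):
    """Return True when today's value is a z_threshold-sigma outlier
    relative to the user's personal baseline (toy rule)."""
    mu = mean(history_hours)
    sigma = stdev(history_hours)
    if sigma == 0:
        return today_hours != mu
    return abs(today_hours - mu) / sigma >= z_threshold

# Past workday lengths (hours) vs. an unusually long day.
baseline = [8.0, 8.5, 8.0, 7.5, 8.0, 8.5, 8.0, 7.5]
print(should_check_in(baseline, 12.0))  # unusually long day -> ask
print(should_check_in(baseline, 8.0))   # ordinary day -> stay quiet
```

The design point is personalization: "longer than usual" is defined against the individual's own history, not a population norm, which is what makes the question feel informed rather than generic.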
If the diagnostic systems that run on software were similarly intelligent, I think that would greatly improve the UX and the motivation to actually use them over a longer period of time. Interfaces with state-of-the-art design, feedback, and classic little elements of gamification are good, but those are basics I would expect of any professional app. The sensor side is where you can really have an edge. And in terms of VR, I don't think diagnostics is necessarily the right use case.
But, as I was saying, if you can combine that with AI, the generative AI part, the LLM, you could really describe to your digital therapist the situation that makes you anxious. Let's say it's a social situation: speaking in front of people, people looking at you in a certain way, and you getting nervous about it. Then the AI says, okay, let me draw that up, put on your Apple Vision Pro right now, and let's go through that situation. And then you feel it, you see it, you can confront yourself with it, and you can talk to the AI while doing it. That is technologically possible right now, and from my perspective it's only been possible since the LLMs. So I think there's massive potential there as well.
Ł: Pretty cool example. Or arachnophobia, right? When you can be challenged in a virtual jungle environment with all the spiders, which you can experience one by one. I mean, that's already on the market.
M: There's even a digital health application covered by the health insurance companies that helps with, you know, specific phobias. But that's really a one-size-fits-all thing: you can maybe have a hundred pieces of content, heights and spiders and things like that. If you, by any chance, have that specific phobia, then that kind of VR content set can already help you confront it. But it's very rare. If you look at how many people have intense anxiety about something and experience it as something they suffer from in their daily life, it's a very high number of people. But how often is it so crystal clear, just one thing? Much more often it's way more complex. It's, you know, being rejected by that specific person, because that's exactly what triggers the feeling you had when your parents did that to you. It's very individual. So if you want to be able to confront it, the system has to be able to produce content on that level. And looking at Sora coming out, probably just as mind-blowing as ChatGPT or DALL·E, if you think another five years ahead, being able to produce that kind of VR content based on language input is surely something that's going to be very interesting.
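The pipeline Mark sketches, free-text description to a structured scene spec to (eventually) generated VR content, could look roughly like this. The keyword table below is a toy stand-in for the generative model call, and every name here is a hypothetical illustration, not a real product API.

```python
from dataclasses import dataclass, field

@dataclass
class ExposureSceneSpec:
    """Structured description a renderer could turn into a VR exposure scene."""
    setting: str                           # e.g. "conference room"
    stressors: list[str] = field(default_factory=list)
    intensity: int = 1                     # graded exposure level, 1 (mild) .. 5

def describe_to_spec(description: str, intensity: int = 1) -> ExposureSceneSpec:
    """Turn a patient's free-text description into a scene spec.

    In a real system this step would be an LLM call returning structured
    output; the keyword lookup here only illustrates the shape of the result.
    """
    text = description.lower()
    keyword_map = {
        "speaking in front of": ("conference room", "audience watching you"),
        "spider": ("jungle clearing", "spiders at a distance"),
        "heights": ("rooftop terrace", "open ledge"),
    }
    for keyword, (setting, stressor) in keyword_map.items():
        if keyword in text:
            return ExposureSceneSpec(setting, [stressor], intensity)
    # No known trigger recognized: fall back to a neutral scene
    return ExposureSceneSpec("neutral room", [], intensity)

spec = describe_to_spec("I get nervous speaking in front of people", intensity=2)
```

The interesting property is the structured intermediate step: the `intensity` field is what would let a digital therapist grade the exposure session by session instead of throwing the patient straight into the worst-case scene.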
Ł: Indeed, yeah, I'm smiling. You can see I'm smiling, because I can relate to so many of the cases you just mentioned.
But I also had a thought: it could be used in reverse, right? Imagine you're using your VR headset and someone hacks it to launch a bunch of virtual spiders right in front of your eyes, knowing that's your weakness. On a more serious note, my last question. You mentioned insurance companies, and I know every country, maybe even every continent, does this differently. I can probably speak more to the US market, which I know a little, and the Polish market, which I know a little. How open are the institutions that support us, like insurance companies, to experimenting with these kinds of new solutions? Or do they consider them not proven enough to reimburse users for their spend?
M: Well, on the status quo, I think they would consider it not mature enough, and I would agree. But to your general question: professionally I've dealt primarily with German insurance companies, though I've also had contact with US and UK insurers in that context, and I experience them as very open and keen to adopt. And that only makes sense, it's very plausible, because as payers and reimbursers they want to make sure they can provide services at low cost, and that's obviously what technology is great at, if it can reach that level of effectiveness and adoption by patients. So for all of these scenarios we were talking about, using AI, improving diagnostics, opening it up for self-diagnosis based on the devices you have around you, I think insurance companies are generally very open to these solutions.
Ł: To follow up on this, do you feel that the public sector, or governments in general, should get involved and remove some roadblocks for these technologies? Should they implement and deploy these solutions in the wild, for the general public?
M: Technologies or regulations?
Ł: They should remove regulations, they should make it easier to experiment, right? Because right now I feel there's too much regulation for startups to grow in the health tech industry, in particular. That's how I feel about it.
M: I don't necessarily agree with that. Having built a mental health startup, actually two, I never felt limited or even disproportionately burdened by regulation, if I'm honest. Look at the two big frameworks in Europe. First of all, it's already pretty impressive that the entire European Union runs on the same frameworks; you don't have to adapt to each different country. If you follow the Medical Device Regulation, the MDR, and the GDPR, you're safe for the entire European Union market. Obviously the UK is a bit of a different game now, but their rules are very similar. That by itself is already pretty impressive. And then I also find both very comprehensive, and they just make sense.
So I find all the details on risk management, quality control, and documentation that are part of the Medical Device Regulation almost helpful. Or I would say clearly helpful, actually, in development. Because if you sit down and just want to create software for the medical space, you'll worry about most of these questions yourself anyway. You want some kind of quality control, you want some kind of risk assessment and management. But how would you do that? Everyone could just come up with their own approach.
But lots of people, mostly smart people, sat down and put it into big frameworks, and it's almost like they're handing you a checklist that you can tick off and use as a guideline, and then be pretty sure you're on the safe side once you get it audited. Even looking at it from a financial perspective: for bringing a new software product, a medical device in risk class IIa, to market, and I've gone through this once or twice, I would estimate the overhead, the additional costs for personnel and consultants and so on, at maybe 50K. I don't think that's crazy in terms of financial planning.
Ł: I agree. I just wonder where the line is between deployment and implementation of actual healing software, whose intention is already to offer some benefit to customers or patients, versus a space for research where we don't yet know where we're going with it.
And my personal experience has been that, yes, maybe the price tag for implementing the certification and the paperwork is fixed. But then, because of the slowness of local authorities, so to speak, and depending on the market, parts of the process may effectively happen multiple times. Even with one common standard, I had to get approval from a local authority, so to speak, and because of that delay, the actual implementation cost is extended by the operational cost of your business during the time it hasn't happened yet. You can't deploy, operate, and basically earn during that audit process. Maybe we don't need so much documentation up front; maybe we're more in an R&D mode, trying to move the boundary. Would you agree that sometimes these frameworks are also outdated for what we need, and don't allow us to go beyond what's already established in terms of R&D and product?
M: I have experienced that phenomenon, but not so much in the context of creating medical software. I think other fields, like tax law or general business law, are often more behind than what I've seen, at least in the medical device regulation. Looking at the frameworks: the GDPR, I don't know, it came in around 2018 or so, so it's relatively new, and the MDR is basically completely fresh.
It only came into force recently, though it had been in the making for a longer time. So I feel both are actually quite fresh and adequate for the time. As for the operational overhead you mentioned: what I've found much more difficult is working with academia and getting the research done. When you're building a medical device, medical software, you need a clinical evaluation, you need independent evaluations, and that I have experienced to be a lot more challenging than the pure regulatory side of getting something audited and through the certification process so you can bring it onto the market.
That always was, in my personal experience, quite smooth. Whereas in the university context I have experienced a lot more of the, how should I call it, I'll just say it: laziness. You know, "I have a safe job, I'm paid by the state, I'm independent, no one can put any pressure on me," and that results in professors taking years and years, at least in the social sciences, to get anything done. So I have a much higher frustration level with that. And it is intertwined, because you need clinical studies, and sometimes even classic RCTs, to get through the certification. So in that sense, yes. But to be honest, I don't have much of a solution to change that, to make it easier for startups, just because you obviously still need scientific research, and it's not centralized. That's nothing a state could just offer as a service for startups.
Ł: Awesome, Mark. Thank you so much for a really fruitful conversation. I learned so much. Really great to have you here.
M: Sure, my pleasure. Thanks for having me.
Ł: Thanks, Mark, for your valuable insights into the intersection of mental health and technology. Your expertise in developing digital products in this field is truly inspiring and offers hope for the future. To our listeners: stay tuned for upcoming episodes, where we will explore new perspectives on the world of technology. Don't forget to subscribe so you don't miss out. Thanks for listening.