BioPharmaceutical Emerging Best Practices Association

BEBPA Blog

Stability Conference Panel Discussion

Question:

Andrew Lennard: It was more of an attempt to explain how the ICH consensus technical document arrived at this measly 1.5 times extrapolation. We ended up being restricted by a decision tree that’s in the current Q1E guideline for this. [The Q1E document] is specific to small molecules. And if, for a small molecule, you have to store it refrigerated for some reason or other, you get penalized for stabilizing your product by putting it in a refrigerator. And that decision tree would only give you one and a half fold of the available real time data as your sort of maximum extrapolation. So when we were writing about extrapolation for stable drug substances, for biologics, we did start off with two fold extrapolation, but the small molecule people pointed out that they are restricted to one and a half fold and they weren’t very happy that biologics would get two fold. And so the regulators then jumped in and decided to restrict biologics to one and a half fold.

Attendee: We currently use the two times and I’m wondering, is the 1.5 something that’s going to be implemented and that everyone has to follow this, or is it still going to be up to whatever company to determine what they’re going to do, and would it also be just going forward, or do you then have to re-assess shelf life that you already had in progress?

Andrew Lennard: It’s important to realize that this is purely for commercial products, as is ICH’s intent, and this is for simple extrapolation in the absence of any more elaborate modeling. So if you took other modeling approaches, then you have the opportunity for much more than one and a half fold. This is a sort of simple extrapolation based on what knowledge you may have for the product. Of course, you don’t have to extrapolate at all; you can restrict yourself purely to the recommended storage condition data. So in that respect, it’s sort of voluntary.

There is also not just frozen drug substance, but we did just about get the door slightly opened for other stable drug products. Now I was always thinking about lyophilized drug product, and in my experience, our lyophilized products have been very, very stable, but regulators were very reluctant to overtly allow any extrapolation for lyophilized drug product. They were okay with drug substance because they saw that as being sufficiently distant from the patient. However, it is drafted in there at the moment that you could talk to the agency if you wanted to apply an extrapolation to say a lyophilized drug product. That’s why, at the moment, it’s really specifically written for frozen drug substance.

Then you’ve got [the question of] how frozen is it? I think it was Pat’s presentation that talked about storing drug substances at minus 70, which is great. I don’t know how many companies actually do cryogenically store their drug substance.  I am more familiar with minus 30.  Here you’ve got considerations about just how stable your product is at minus 30. How distant is it from the glass transition temperature? 

Patricia Cash: I like what [Andrew] said about reminding us that the ICH Q1 applies to commercial products, and the whole 1.5 times extrapolation is really a big problem during development more than at commercial, because hopefully, by commercial, you’re not trying to get two more months stability. The whole system, at least in my experience in the companies I’ve worked with, was set up on 1, 3, 6, 12 months. We’re doubling, we’re testing double. So as long as we can continue doing that in development, we’re okay. But if we have to start adding in a 4.5 month and start adding in extra time points there in order to get through our phase one studies, that’s where it’s going to really affect the industry.

Andrew Lennard: I just want to point out, in clinical development, the EMA allows three fold for a biologic.
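The extrapolation caps being discussed (the 1.5-fold in the draft revision, the 2-fold some companies have used, and the 3-fold the EMA allows for biologics in clinical development) amount to simple arithmetic. As a minimal sketch, with hypothetical numbers rather than any guideline's wording:

```python
def max_extrapolated_shelf_life(real_time_months, factor=1.5):
    """Simple extrapolation: the claimable shelf life is capped at `factor`
    times the real-time data available at the recommended storage condition."""
    return factor * real_time_months

# With 12 months of real-time data:
cap_draft = max_extrapolated_shelf_life(12)       # 1.5-fold cap -> 18.0 months
cap_2x = max_extrapolated_shelf_life(12, 2.0)     # 2-fold -> 24.0 months
cap_ema = max_extrapolated_shelf_life(12, 3.0)    # 3-fold (clinical) -> 36.0 months
```

The point of the decision-tree discussion above is which `factor` a product is allowed, not the arithmetic itself.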

Laureen Little (Moderator): Talking about the differences between the EMA and the US, there used to be a difference in how you handled that: one was a stability endpoint and the other was a retest date. Is that still different between the US and Europe? Or now are we all doing retest dates?

Andrew Lennard: Biologics, until the revision, never allowed for retest, at least in Europe. The European old variations guideline actually ruled out retest for a biologic. That’s been revised now, and the ICH guideline is now allowing the option of having a retest date for stable drug substance.

Laureen Little (Moderator): This was for clinical products where you have limited data. And it sounds like that’s no longer a difference.

Andrew Lennard: I’ve not heard of that. The main difference I know of is for the FDA, you don’t need to declare a shelf life. For us in Europe, that is still there.

Laureen Little (Moderator): Great, that’s what I was trying to express. And we do have another question here, about how this applies to cell therapy.

 

Question:

Attendee: My question is if we’re going to the commercial, and I just learned that three PPQ lots need to be tested for stability. For the PPQ lots, do we still need to follow the 1.5 or two fold of the time points for cell therapy, for example, CAR-T?

Patricia Cash: Well, if you’re talking about an individualized cell therapy, where you’re taking it out of the patient, you really don’t have PPQ lots, right?

Attendee: We do PPQ lots that will be from different individuals. Is anybody in the audience more of an expert on cell therapy?

Laureen Little (Moderator): I wouldn’t say I’m a huge expert on it, but yeah, I can start some of the discussion. You know, the problem is, and this was actually one of the questions that I had, is I would call this a fresh product, so what you’re doing is getting these cells in, you’re manipulating them, and then you’re sending them right back out. And so if you think about this, it doesn’t make sense anymore to say the shelf life is 1.5 times what you’ve actually tested. Does that mean that you don’t do your testing? Because, in fact, you can have manufacturing issues that make one product be there for a longer time, etc. But we do have a tendency to ship before all of our test results are back.

Attendee: I’m not sure whether it is proper to call it a fresh product because we do cryopreserve.

Laureen Little (Moderator): I’m not weighing in on yes or no, but this is a discussion point. If you can cryopreserve your product, then that becomes a little bit of a different issue; now you actually can keep the product in storage. You’re not immediately having to test and store it at an appropriate temperature during shipping. I do think you are going to need to have some shelf life. I throw this back to the subject matter experts. Pat, with that bit of clarification, what do you think?

Patricia Cash: Yeah, I agree with you. I mean, it’s now a cryopreserved product you’re storing. Basically, anytime you store something and then you want to use it, you have to show that it’s still good.

Laureen Little (Moderator): So Julie, was there any discussion in USP about cell therapies?

Julie Zhang: We do have an expert committee for cell and gene therapy in this cycle of USP chapter revisions. This document is from the prior revision cycle. For stability studies in this area, I don’t really know if there’s any plan at this point, but I definitely can bring this back. I know this is going to be a hot area and it is not covered in this particular chapter. I can definitely bring this back as it’s of public interest.

Andrew Lennard: What I would add is that the Q1 revision Annex 3 on ATMPs has been very, very conservative, and that’s mainly because the FDA representatives have not been very amenable to any sort of extrapolation modeling and any sort of alternative approaches for cell and gene therapies. I mean the door’s not closed, but you have to discuss it with the FDA.

Patricia Cash: I think it makes a lot of sense, though, because the whole field is emerging.  I go back to the early 90s, late 80s, when antibodies were emerging and we didn’t know what we didn’t know. So there we’re going to be conservative.

Andrew Lennard: I can understand that for cell therapies, but when it comes to viral vectors? Unfortunately, it’s all being tarred with the same brush.

Laureen Little (Moderator): Yeah, that’s actually one of my pet peeves, that gene therapies are so very different from cell therapies. And in every way you can think of. How do you do force degradation studies? We all know, if you’ve got a cell therapy and you do something to it, the cells pop, they die, etc. And so doing forced degradation is very difficult. We have a couple, maybe two tools, and trying to get something where it only kills a small portion of the cells. Good luck, right?

 

Question:

Attendee: I’m looking for recommendations for a protein, an antibody. For example, if it’s 10 mg/ml, and now we have a new presentation, for example, one mg/ml, what is the best path forward for giving it a shelf life? We have data for just 10 mg/ml, and even the container closure is different.

Patricia Cash: Yeah, if you went from a 10 mg/ml to a one mg/ml, and you changed the container, you have to run the real time stability. And I think at that point you can use your 10 mg/ml as support to say that you know it’s going to be stable. If that’s your new GMP lot, you’re going to be in that difficult situation where you’re up to date on your stability. Basically, you’re running your stability, and if you’re running at three months you’re getting a six month expiration. You need to run the six months before the six months expires. You need to pull it early and test quickly. I don’t see a way around it, because you can’t rely on the shelf life for the 10 mg/ml, because it’s a different concentration. Now, if you had a 10 mg/ml and a one mg/ml formulation, and now you were developing a 3 mg/ml, you might then be able to leverage some bracketing and say, we know it’s good at 10 and we know it’s good at one. I think [10 to one] is a big enough difference, because you don’t know, for instance, in that dilution, if now it aggregates more or something; you know you’re going to have a difference in your quality attributes.

Andrew Lennard: I would definitely agree with that. But I would ask whether people think it’s reasonable to reduce the testing, though, rather than doing every stability test, could you justify those attributes that you would consider to be relevant to a concentration change?

Laureen Little (Moderator): You know, I think it’s going to be a bit product specific, because I do a lot of work in the complex world, cell therapies, proteomes and things like this. And I do know, say, in the proteome world, where we go down from a 10x to 2x type of thing, right? And that’s a pretty common change. They’re logarithmic changes that we have. I’d be loath to do less testing. Antibodies? I think I’d be more likely to do it. What do you think?

Patricia Cash: Yeah, it’s a good point. You’d have to do it kind of at a risk based approach to it. I could also see saying we’re going to extend the shelf life based on, for instance, the size exclusion, if you knew that that’s what goes out first in your 10 mg/ml. I would continue to do the testing, but maybe you can extend the shelf life based on a few key tests and just run the other tests at a more normal pace.

Attendee: So are you recommending a kind of shorter panel, not the release panel, but a stability spec that doesn’t include all the release specifications?

Patricia Cash: I would go cautiously with that. I would get regulatory approval. You know how, when you write the commitment, that you’re going to extend the expiration date as long as it continues to meet criteria? You might be able to say you will be extending the expiration date based on these tests because they’re considered the most important. And then the other testing will be done subsequently, because you’re at a point where you have to test before you can extend. So every time a stability time point comes up you’ll be in a time crunch. And if you’re working with a CMO, you may have limited flexibility with that.

Attendee: I agree. That’s a reasonable approach. Thank you so much.

 

Question:

Attendee: My question is regarding stability comparability. What analytical or statistical tools would be most convincing when we’re trying to demonstrate stability comparability post approval to the regulators or the QA teams?

Patricia Cash: That’s a loaded question. We could have a whole seminar on that. To me, though, what’s important in comparability is not even necessarily the stability, but the degradation pattern. So in other words, when you think about the shaking of the box, you do your analytical methods, and you show that everything looks similar. When you shake the box, it reacts the same. Well, what about when you stress the box? Does it degrade the same? I think that’s what’s key in comparability. So I would say when you’re trying to demonstrate stability comparability, you want to degrade your product. You don’t want to hold it at your intended storage temperature. You want to use those forced degradation conditions and you want to show that using process A, it degrades similarly to using process B. I’m not a statistician, so I don’t get very technical there. I just do, you know, the slope-of-the-curve type of things.

Laureen Little (Moderator): That one was scary. I don’t think any of us are jumping in to help you there, but yeah, I do think it’s a critical question. We all want to do that comparability and use our prior experience but how do we set that up? I’m curious, I know that we now have Type D meetings with the FDA where you can ask very specific analytical questions, and I have used this for potency, but I’m wondering if anyone has gone and discussed some of their plans for their stability program with the FDA at a meeting like this. Pat, have you ever seen that?

Patricia Cash: Yes. And in Europe, you have a PACMP, where you can get agreement on basically what the criteria will be to show that a change doesn’t affect your product. So it’s kind of almost like an approved comparability protocol, where if you meet these you can implement the change.

Laureen Little (Moderator): Chana is one of our speakers tomorrow. She is recently retired and having a great time being retired, but Chana, I see you have a comment about comparability, degradation patterns, etc.

 

Question:

Chana Fuchs: There was a discussion on the comparability, and you were talking about degradation patterns, and then for late stage and post BLA, rates of degradation are things we look at a lot. So I think it’s not just the patterns, but to get a degradation rate. I mean, what was said is also true, that you need to get degradation. You can’t have non-degradation and compare it. So I think it’s critical to address rates of degradation too and compare those. And this is what we actually do when we review: the comparison of the rates.

Patricia Cash: So can you just use the rate of degradation, the slope of the line?

Chana Fuchs: Basically, you can do that. We have a lot of issues sometimes on that, and you need to be careful about it, but yes, basically the slope of the line would be a good way of doing it. You need to have enough time points; a three-time-point stress or forced degradation will not really get you a good slope. From what I hear, you also have to be very careful about putting both pre and post change into the same temperature incubator and the same area, because small temperature changes can change the slope of a line a lot.

So companies who may just do a comparison to historical may find suddenly that they have a change. You also need to find the right temperature. It’s not just ICH accelerated or stressed. You need to know your product and know how stable or not stable it is, and then put that on the right degradation [conditions], so you get a slope. If you do it too fast, you can pull samples on different days, or a few days difference, and things will start looking a little bit different, if somebody wants to really rush it. And if you do it too slow, it takes forever. You need to know your temperature/time, to get to know your product and devise something with temperature/time that will give you a slope that would be consistent and reproducible and within a timeline that you can do. And that’s why [I recommend] having more time points, like at least five. I know people talk about three and then more. So today we talked about it, minimum four. And I always tell people, minimum five, because for biologics, there’s a bigger range on assay results. The assays themselves sometimes have a wider range, you know, reproducibility, etc. So yeah, all that counts, but yes, the slope of degradation.

Andrew Lennard: I just want to endorse what Chana was saying about having five data points, because that’s what my biostats people have always told me: if you want any chance of identifying the shape of a trend, then you need at least five data points.

Laureen Little (Moderator): If you think about it, you only need two to define a line. Three only tells you that your line’s not a very good fit, but you have no idea what the shape is. We’ve all known that since about kindergarten, or maybe fourth grade, so I don’t think we even need math to realize that.
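As a minimal sketch of the slope comparison described above (an ordinary least-squares slope through at least five stressed time points for pre- and post-change material), with entirely hypothetical purity numbers:

```python
def slope(times, values):
    """Ordinary least-squares slope of values vs. times (degradation rate)."""
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    return num / sum((t - mt) ** 2 for t in times)

# Hypothetical purity (%) at five stressed time points (weeks) for pre- and
# post-change material; comparing the fitted slopes compares degradation rates.
weeks = [0, 1, 2, 3, 4]
pre_change = [99.0, 98.1, 97.2, 96.3, 95.4]
post_change = [99.0, 98.0, 97.1, 96.1, 95.2]
rate_pre = slope(weeks, pre_change)    # about -0.9 %/week
rate_post = slope(weeks, post_change)  # about -0.95 %/week
```

In practice you would also want confidence intervals on the slopes, and the same incubator and pull schedule for both arms, for the reasons Chana gives.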

 

Question:

Andrew Lennard: I was just interested to know how USP is going to maintain alignment with the Q1 revision, because with the timelines we have at the moment for Q1, the earliest the final version will be ready is the end of next year. But personally, I would say that would be a little optimistic. With the change, we’re not anticipating anything traumatic, but you never know how things are really going to turn out in the end. But it was good to see that they eventually adopted things like the short term storage condition, which is something that we got into the draft Q1 as well, and the emphasis on in-use stability. So I could definitely see that there’s some synergy between the two.

Julie Zhang: You’re not the only one who asked us this. First of all, we got many comments, over 250, and that will take some time to address. And secondly, our expert panel will not be able to meet before December. That’s my expectation, and I did check the Q1 working group timeline. It seems like they put April next year as a sign-off timeframe. If that’s the case, that’s actually good. My expectation for our chapter is that about the middle of next year we could finish all of the comments and finalize the chapter. But if the Q1 guideline can be out before that, that would be great; then we can actually align with it. Yeah, our intention is still to align with it. It really depends on the progress on both sides.

 

Question:

Attendee: [About doing placebo stability], the EMA guideline states that stability studies are only required in cases where there is reason to suspect that the placebo will undergo changes in degradation or in its physical characteristics. So comparing that to, say, ICH, the draft guidance and USP, I wanted to see how people feel about that. For instance, do you really have to do placebo stability studies if you don’t have any reason to suspect it’s going to undergo degradation?

Patricia Cash: I have that definitive answer sitting on my desk right now from a regulator recently: yes, you just have to do it. I mean, you have to do it for placebo. I have used that [argument] four times, because it’s not an active biologic in there. But actually, when I was at MedImmune, we were, at that point, trying to match every placebo, making sure that the placebo matches exactly the buffer of your product. And we had multiple different products, so we ended up going to what we call the generic placebo, which made life so much easier, because we had a placebo we used all the time that had similar components, maybe not completely matched; that way we had a shelf life already established.

Attendee: Okay, so the whole strategy, does that mimic that of the active DP? Would that be maybe three batches at similar time points?

Patricia Cash: Yeah, that’s a good point. You might be able to get by with fewer batches. Obviously you’re not really testing for much, because, you know, what goes wrong in your placebo? And obviously your placebo is not going to become a final product; it’s only for development. It’s going into people, so safety has to be shown.

 

Question:

Laureen Little (Moderator): So Pat, you made an interesting statement that you went to a generic placebo. What kind of hoops did you have to jump through to get that?

Patricia Cash: It’s not too much different than going with saline, like some companies go with saline. We just went with one which had our normal components, like polysorbate, our normal type of additives we had in most of our products.

 

Question:

Attendee: I was interested in [the temperature excursion USP chapter] because it states that it was recommended to incubate samples at elevated temperature early in the shelf life and then test it out to shelf life at its recommended storage conditions. And I personally just have not seen that explicitly stated anywhere. And I was just wondering if it is explicitly stated somewhere in the USP?

Julie Zhang: I think it’s only in this chapter. At least, I’m not aware of anywhere else.

Andrew Lennard: If I’m hearing this correctly, this sounds like the TGA requirement you talk about, where you do accelerated stability first, and then put it onto the long term stability. Is that what you’re asking?

Attendee: Not necessarily a full six month accelerated study, but yeah, I was looking at the USP <1049>, and I didn’t see it explicitly stated there. It states that there is a requirement to do studies to cover transportation and things like that. But the specific design of the study doesn’t seem to be stated anywhere that I’ve seen.

Andrew Lennard: It’s not in ICH. I’m hoping it will stay out of ICH, because, to my knowledge, this is a TGA requirement that we have to do because we want to go to Australia.

Attendee: But is it in the USP chapter, Julie?

Julie Zhang: Yeah, it is. We have a section called temperature cycling; the purpose is mostly for assessing an excursion when it happens. And you’re right, I did put some examples of how you actually can design the studies.

Andrew Lennard: So, Julie, the USP is going to be describing the practice of doing the accelerated temperature time before moving that sample onto the long term?

Julie Zhang: Oh, it’s actually a separate thing. At the beginning, pulling several different conditions at [different] temperatures. But the time duration is not really specified, and then later on putting [the samples at] the regular long term storage condition; that’s basically what this section is talking about.

Laureen Little (Moderator): This is not totally related, but Jane, you had some comment about being careful about minus 20 degrees. You want to mention that right now? 

Jane Robinson: Yes, it was just that Pat showed a minus 20 set of data. And it seems to be a temperature that’s very nice for people to pick, but it is quite dangerous. As Andrew mentioned the glass transition temperature: at minus 20 and around there, quite a few liquid formulations are not truly frozen, so you’re flipping backwards and forwards between frozen and not frozen. And in fact, it can be particularly destabilizing. So it’s a temperature that I would always keep well away from for liquid formulations.

Patricia Cash: I don’t disagree with you. 

Andrew Lennard: I agree as well, because I always have a ballpark number of minus 18 as being the glass transition temperature.

Patricia Cash: Yeah, I did have a client who did a negative 20 despite my asking them not to. And then they came to me and said, Well, we got the stability samples in, and some of them weren’t frozen.

 

Question:

Laureen Little (Moderator): Yeah, that’s the life of a consultant. Our last questions of the day are near and dear to my heart: these ICH time points, where we’re talking about three months and six months, or do you do more or less, and what happens if you only have a few lots?

Attendee: So it’s been my experience that a lot of this material is precious, especially in the clinical phases, for the DP especially. Do you ever see the agency requiring you to go beyond six months? So in ICH, you see a three and a six month recommendation as a minimum. But what if you know you’re going to have, potentially, a product that goes outside of long term specifications at accelerated temperatures beyond six months? Is there any added value, and is there any expectation, to have time points beyond six months?

Patricia Cash: For the accelerated? Yeah, I would say, why would you expect that to happen, that it would go beyond? I mean, your accelerated serves one purpose: to show that you can have it at that accelerated condition for up to a certain amount of time, but you wouldn’t want it above six months. I did include in my talk, if you’re storing it, for instance, at negative 70, you might want to extend the two to eight, because that will give you some manufacturing flexibility, if you know it’s good for up to a year at two to eight, for instance.

Attendee: Sure, that makes sense. So especially with a thermally labile DS, where you expect it to completely fall apart at high temps, I mean, it doesn’t make any sense to go beyond that, right?

Patricia Cash: Right, exactly.

Attendee: Same for stress. I mean, you’re just adding more heat to the system, right? You expect it to break down more. And other than setting temperature excursion guidance, [is there value in] having more than one lot on a stress condition?

Patricia Cash: Right. And knowing how it breaks down, that’s really the purpose for running those studies.

 

Question:

Attendee: I did have one more question in there, Laureen. I’ve seen, you know, in earlier phases, when you have a limited number of lots to set shelf life from, what’s the minimum number you need to be able to say, look, I’m going beyond long term stability for an extrapolation? Can you have one lot and say, I have six months data; is it acceptable to go to nine months now?

Patricia Cash: So it depends again on the phase. But in phase one, for instance, I’ve done it based on one lot and gone double, the two-times [extrapolation].

Attendee: Okay, that’s what I struggle with in the last question there. I’ve seen some CMOs that have default shelf lives just to allow them to move the product within the company without it being expired, because once it’s expired, they won’t even ship it off site. But I’ve never seen guidance that supports a default.

Patricia Cash: I’ve done that before. What you do is you ship it off, and you hope by the time that you get the question, that you have the data, that you can then say, Yes, we have the data to support that. That’s sort of the developing based on prayer, which is, you know, not necessarily the best.

 

Question:

Nadine Ritter: Prior knowledge and predictive modeling presume historical stability data accurately detected physical and functional degradation. Do you think predictive stability studies should require data that the analytical methods were validated to be sensitive and specific for all potential product degradation pathways? For the last 20 years, I’ve consistently seen huge gaps in stability method validations for all types of biological products. So this is not a hypothetical issue. 

Andrew Lennard: I have made an assumption that the methods are going to be fully validated. Sorry, are you talking about analytical methods?

Nadine Ritter: I mean, if you look at ICH Q1B, the photo degradation, you know, photo stability, photo confirmatory, it requires a photo degradation study to prove that the methods were capable of picking up anything that might show up under photo confirmatory studies. It’s actually the same thing for extractables and leachables. We do extractable studies so that we can believe the results from leachable studies when we don’t see leachables. My question is that all this predictive modeling is presuming an unspoken assertion that the methods used back then, or with the models, were able to pick up all potential degradation pathways, and my evidence in real time says that’s not true.

What I’m saying is, should there be some caveat in the guidances, and certainly in practical advice, that this is all predicated on you having first confirmed that those data sets were predictive and could have detected degradation, in which case, then, you can use predictive stability. But if you have any doubts about the methods used to generate those model data sets or prior knowledge, then you need to think twice about the risk of predictive stability. That’s my big point.

Andrew Lennard: Yeah, this is quite a good point. Maybe this is one of the things that needs to be covered by the risk management part of the model, which the guideline hasn’t gone into in great detail; it’s supposed to be more principles based. Yeah, recognizing the limitation of your analytical methods does sound like something that should be discussed.

Laureen Little (Moderator): And to say it’s actually required that you see a change before you could even put it into your predictive model: isn’t that aspect of it somewhat covered?

Nadine Ritter: It’s covered in the sense that if you did thermal degradation, you could see the accelerated temperatures. But another question came up on the panel that will go into this, which is, what about the degradation pathways from chemical degradation, deamidation, oxidation, which we know have biological impact in some therapeutics? I just think that there could be some more clarity, or reinforcing of the risk assessment part of this: first confirm the methods were capable, or don’t believe the data that you’re using.

Marla Abodeely (Session Chair): It all comes down to that data integrity, right? When using AI and machine learning, making sure that what you put in has integrity, like you’re saying, Nadine.

Nadine Ritter: What I’m saying is that this is not being emphasized. All these great discussions I think are really important. But I think that everybody needs to start by saying, of course, this presumes that you have verified that the data sets are accurate and have data integrity. I mean, we think about that from a documentation perspective, but it is also experimental. And we have two ICH guidance documents that already require this: the Q1B, which is being rolled in, but it’s just for the photo degradation pathway, and, you know, the new extractables and leachables guidance coming out, which again is just for extractables and leachables, but the principle is the same. I once heard an FDA person say, you can’t see ghosts with a metal detector, and if you’re using historical data for ghosts, you have to be sure you had proven ghost detectors. So that was just a point. And Andrew, in your position, I’d really love to see this get reinforced in the guidance.

Marla Abodeely (Session Chair): This is something where you feel like you need one of those independent initiatives that can dig into this blindly. 

Andrew Lennard: We haven’t got around to the comments yet, but if this has been provided as a comment, it will be discussed and considered.

 

Question:

Earl Zablackis: It’s been a pet peeve of mine for the last 15 years or so that most people don’t actually, in measurement analysis, include the actual uncertainty of the measurement. It’s quite common in the pharma industry to throw out an RSD and assume that that’s the real uncertainty. But it’s actually not. If you look at things like the way bullets are manufactured, or other commodities, there’s a true mathematical measurement of the uncertainty associated with every measurement. So you always know that a mean data point for some value of a lot has an accepted and true uncertainty. USP just recently published a chapter on how to measure that. I’m not sure that I’ve seen anybody actually submit data like that yet. I think it’s probably still true that it doesn’t exist for very much pharmaceutical data. And so I wonder, when you have models like these, where is the uncertainty used in those to help make the predictive modeling more accurate?

Laureen Little (Moderator): I work in the potency area, where we have huge uncertainties, so I think it’s easier to really see it there. I’ve noticed how the models depend greatly on those early time points, where you’re getting very small changes, and those small changes actually are within the method variability, and yet we have a huge reliance on that data. It’s fine because it keeps lining up, if you’ve got a lot of data. But I think it does mean we need to acknowledge that there’s more uncertainty in that area than what the model is allowing for.

Andrew Lennard: I mean, I don’t know if this is different from what you’re saying, but for our linear regression, we’ve used mixed-effects models, which are all based around variability of slope and intercept. The Bayesian analysis is also centered around those variables. So, yeah, some models incorporate variability a bit more overtly than other models.

Kristina Flavier: I will address this a little bit in my talk as well. As Andrew mentioned, you can generate the errors using Monte Carlo simulations, or you can use bootstrapping. And so what you might do is incorporate what we would typically call measurement error, or analytical variability. You can say, this is the error on this data point, this is our measured value, what is a reasonable range of values based on that error, and then incorporate that in the model. So honestly, I think it will depend a lot on who is doing it and how they calculate error, but that is one way it is done. When you see those error bars, they are based on the error of measurement, as well as how well the points fit the linear regression or whatever model you’re using.
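To illustrate the Monte Carlo idea Kristina describes, here is a minimal sketch (not from the panel; all data and the assay SD are invented for illustration) of propagating a stated measurement error through a stability regression: each observation is perturbed by the assay’s standard deviation, the line is refit, and the spread of the extrapolated value reflects the analytical uncertainty.

```python
# Monte Carlo propagation of measurement error through a stability
# regression. All data and the assay SD below are hypothetical.
import random
import statistics

random.seed(1)

months = [0, 3, 6, 9, 12, 18]                      # pull points
purity = [99.0, 98.7, 98.5, 98.1, 97.9, 97.3]      # % purity results
assay_sd = 0.3                                     # stated measurement SD

def ols(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Perturb each observation by its measurement error and refit the line.
preds_36m = []
for _ in range(5000):
    noisy = [y + random.gauss(0.0, assay_sd) for y in purity]
    slope, intercept = ols(months, noisy)
    preds_36m.append(intercept + slope * 36)

cuts = statistics.quantiles(preds_36m, n=40)       # 2.5%, 5%, ..., 97.5%
lo, hi = cuts[0], cuts[-1]
print(f"36-month prediction spread (measurement error only): "
      f"{lo:.2f} to {hi:.2f} % purity")
```

Bootstrapping, the other technique mentioned, would instead resample residuals or data points; either way, the point is that the assay’s uncertainty, not just the scatter of the fit, ends up reflected in the prediction interval.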

 

Question:

Attendee: It sounds like the regulatory authorities are still not entirely sold on the concept, or on the weight that can be put behind kinetic modeling studies. The question I had was whether there will be any specific guidance on modeling studies: things like, if we have multiple temperatures, do we need to see degradation at those temperatures in order to generate the model? If so, how much? How many data points? What kind of prediction intervals to use? And then things like allowance for outliers later in the study. I wasn’t sure if those kinds of specifics will be included in future guidances.

Andrew Lennard: We’re purposely not being that specific, because it would just tie people down. Early drafts may well have included some very specific practices, but we’ve since tried to make it more principle-based. We don’t tie you down to specific details in terms of temperatures or number of data points.

Attendee: Even some guidance on the extent to which you can extrapolate or anything like that?

Andrew Lennard: The regulators really would like to do that, but have actually been advised against it, because it’s going to be very product-specific and model-specific. The extent of the extrapolation should be driven by the data and your knowledge, and not arbitrarily prescribed as, say, two fold or three fold. So, yeah, I personally fought to keep that flexibility in, so you can justify the extent [needed].

Attendee: Existing guidances would say that in certain situations you can’t extrapolate at all, but in other areas you can, and things like that. I wasn’t sure whether those were written with kinetic modeling in mind?

Andrew Lennard: No, because those restrictions apply to simple models based on very limited data, whereas things like thermokinetic modeling have really solid scientific backing. So, depending on your particular product and the data you have, you should be able to justify what you’re claiming for the extrapolation.

 

Question: 

Kim HuynhBa: Andrew, thank you very much. That’s very enlightening; you presented a lot of case studies, and it’s amazing, because I don’t usually see that many case studies, and also a recap of the revisions of Q1. I’m curious: in the Q1 revision, they actually mention ASAP (the Accelerated Stability Assessment Program). How does that approach compare with those that you mentioned in this example? And along with the previous question, are they ready? Are they acceptable somewhere? Do you know of any approvals for those, or any regulatory agency that is more open to discussing these models than others? Would you be able to comment on that?

Andrew Lennard: Yeah, that’s what I was trying to say closer to the end: it’s all very mixed at the moment. Formally, most agencies are going to tell you they want to see the real-time data until the revision is implemented. Certainly [we’ve] heard that very clearly from FDA and Health Canada. But they want to see what we’re doing in this field. They want to be able to anticipate what’s going to happen when the guideline is implemented. I have seen, or I do hear of, models being accepted in post-approval and clinical studies, but it’s a bit patchy, even within the same agency. I’ve heard of ASAP being approved for one small molecule product, and then somebody tells me that Health Canada didn’t accept ASAP for another product. Unfortunately, of course, this is other companies’ work, and I don’t know all the full details and background; there may be very good reasons for this. So, yeah, you asked an awful lot more, didn’t you? The guideline shouldn’t be mentioning ASAP anywhere; that’s why I was talking about thermokinetic modeling.

Kim HuynhBa: I totally agree. It took me by surprise as well. I want to say something about the previous questions as well. We talk a lot about prior knowledge and using data to model, but that data won’t help if you don’t have a good method for it, or if you don’t have a good process to make the product; then you’re not going to be able to do any kind of predictive modeling. If you only have three months of data, how much can you do with that? You can imagine what the prediction is going to be. I encourage people to look into some of the chapters that Nadine mentioned about uncertainty; it is actually just a matter of accuracy and precision, and that is actually required for your analytical methods. But you have to understand the variability of your method and of your process before you can get into any kind of predictive modeling.

 

Question:  

Laureen Little (Moderator): For Chana: you’d mentioned, I think in your first case study, where you saw the polysorbate degradation. I thought that was a really interesting observation, that it is one of the first things you look at. They did a worst-case scenario, and you said we could accept it based upon that. When you did that, did you ask for HCP consistency with regard to what molecular entities were there? Not just whether it is below 10 picomoles, but actually what is there? Because I know that sometimes, as we go batch to batch, we see different host cell proteins coming out.

Chana Fuchs: In general, we always look at consistency, and consistency of manufacturing. I mean, this is a BLA; there is a validation already, right? It’s a validated process, and part of the validation is looking at host cell protein removal to a certain range. It depends on the applicant whether they measure the host cell protein for each batch. I think in this case they did, but I cannot confirm that 100%, and not specific to the lipases or anything like that. Regardless, if they make a manufacturing change upstream or downstream, in some of the columns, whatever manufacturing change or shift, they would put the product on stability. And actually, this is also what the annual lots are about, right? Whether they catch it or not there, it should be caught with the annual stability lot. We don’t just look at stability; we’re looking at all the controls in place. But nobody, as far as I know, has requested specific testing of specific host cell protein lipases to ensure that you’re getting identical amounts of the same lipases, if that’s what you’re aiming for.

Andrew Lennard: This might be one of the reasons why not every company claims that they have polysorbate degradation. That plus the particular method they’re using to look at polysorbate. I have been wondering whether some products or their processes are somehow or other managing to remove lipase and other times not.

Laureen Little (Moderator): I do know that some people are trying to develop master cell banks for knockouts, where they’re knocking out some of these lipases and things so that we don’t have those as a concern so much anymore.

Chana Fuchs: From our perspective, during the IND phase, most of us ask, and I can’t say we’re 100% consistent, because FDA is big and you can’t control consistency that way, but most of us ask that polysorbates be tested on stability, so we have that information. There are many products from many companies that have degradation and control it. Sure, you can find it in redacted reviews, and you can find that they use polysorbates in the package insert. So you know that we’re asking this, and there are ways to check some of these things, but nothing is 100% consistent. But I like the knockouts. That would be great.

Andrew Lennard: I was just gonna say that FP has been doing a lot of work on polysorbate degradation. I don’t know whether any of their papers are published yet, but they’ve certainly been working on a couple of papers on this topic. One of the take-homes that I recall from hearing about them is that you can get up to 55% degradation of polysorbate, but it really doesn’t have any impact, because you need so little for it to have its claimed effect on high molecular weight formation.

Chana Fuchs: And indeed, you see that lag, right? You see the particles coming up, particles resulting from the polysorbate degradation, not the protein, but when you look at the polysorbate levels, you see a real lag before you can detect the polysorbate degradation. So I think that also, you’ve got a lot going on there.

Andrew Lennard: I think what they’re trying to say, though, is that there isn’t any real impact, or certainly when it comes to the safety of the product, because there’s so much polysorbate present.

 

Question: 

Kim HuynhBa: Chana, you mentioned a lot on photo stability, and I have a couple clients who are saying that if they use amber vials, then they don’t have to do photo stability. Is that true or is that not? 

Chana Fuchs: I’m not going to say for sure, because, surprisingly enough, I haven’t seen our products in amber vials. But as far as I know, and again, take it with a grain of salt, there is no mandated photostability testing for drugs in amber vials, but it is strongly, strongly recommended for light-sensitive products. I would ask a company to do it anyway. I don’t think FDA mandates this or requires it 100%, but I very strongly recommend it. And I think it’s really stupid to say, absolutely not, we’re not going to do it. You could do a shortened study. You could do something.

Kim HuynhBa: Yep, a very low-cost single study, just to show the data that it doesn’t have an impact. I mean, how big a deal is it? But I don’t know why they don’t want to see it.

Chana Fuchs: But then, the impact mostly comes downstream. And even with amber glass, a lot of light still goes through; it blocks most of the wavelengths below 450 nm, the UV radiation and the blue light, but not the rest.

John Campbell: I would agree with that, if I could add, Kim. I think there would be a need to demonstrate control with data, not just assume control.

Kim HuynhBa: And not only that, those data will be very helpful later on for the supply chain as well. When the product gets short-term exposure to light, those data are so helpful.

Laureen Little (Moderator): I go back to Pat’s comment yesterday; she’s going to get the quote of the conference: “We don’t know what we don’t know.” If you don’t test it, then you don’t know that something’s getting through.

Andrew Lennard: To me, that is almost like fear of the unknown, and you absolutely can’t get around that, because it’s like proving a negative. And this is something I battle with regulators about constantly, because they’re always saying, but what if?

Kim HuynhBa: Well, that depends on the unknown though, Andrew, I mean, it’s really dependent on the unknown and dependent on what data you have. We’re taking it to the extreme here, and that’s not true.

Marla Abodeely (Session Chair): I wonder if they must have done forced degradation studies already?

Kim HuynhBa: Yes. And also, usually a light study is a combination of light and heat. At least one study would help to put that at ease and also to structure your program later. So it’s not that we fear the unknown; we want to know, and to have data to support that, rather than just a guess.

 

Question: 

Attendee: Yesterday, Patricia mentioned using a generic placebo. My question is about placebos in general, especially the difference in viscosity between the placebo and the drug product for high-concentration products. How do you address the viscosity difference between the placebo and the drug product, so as not to unblind blinded clinical studies? Is reformulating the placebo to match the viscosity of the high-concentration product common, and at what phase of development is this typically done?

Patricia Cash: That’s like a whole other conference, to be honest. In the case that we had, the generic placebos were close enough; they were all within 0.02 to 0.05% polysorbate. In our case, one of the things we really wrestled with was, how do you make it yellow? If your product is slightly yellow, do you make the placebo slightly yellow? It’s easy to spot saline, which has no viscosity, versus a high-concentration yellow protein. So really, the blinding is worthy of its own conference. But in our case, we were lucky, because we worked within a certain range. And we did not make it yellow. We did play with that idea, but in the end we didn’t.

Attendee: Yeah, for us, for color, it’s easier to blind, because you can cover it or tape it up. 

Patricia Cash: I mean, most people in the covid vaccine trials knew whether they were getting the placebo or not because of the reactions they talked about. I understand what you’re saying. From our perspective, we didn’t try to make it more viscous, to mimic the protein; we tried to make it as close to the protein formulation as possible, so that any effects could be attributed to the protein and not the formulation.

Andrew Lennard: So I’ve worked on a product where we definitely did adjust for viscosity. In all other respects, the formulation was the same, but we did add a viscosity modifier to replace the protein.

 

Question: 

Andrew Lennard: I fully accept that biologics are a lot more complex than small molecules, but I do question whether we should say they are more sensitive. When I’ve not been working on stability, I’ve often been working on nitrosamines, and there I have really seen just how sensitive small molecules are, because the chemistry is the same. For me, it all comes down to knowledge of the structure of your protein and of solvent accessibility when assessing sensitivity or not to a given insult; you can have a buried, susceptible group that isn’t accessible. So in that respect, you could say that our biologic is less sensitive than a small molecule that has a particular group solvent-accessible. I really try to explain why we consider our biologics to be well characterized. And this all comes back to trying to get a harmonized understanding across regulators of what a well characterized biologic is. The purpose of that is that one of the main criteria for using alternative approaches to stability is that it very often is expected to be for what’s called a well characterized biologic.

Kim HuynhBa: I agree to disagree, but I hear your point there, Andrew, and I hear what your goal is. I think you took my sentence out of context. Biologics have their key differences, and if you talk about nitrosamines, you’re comparing a biologic’s stability with a toxic type of impurity, which I don’t think is really equivalent. From what I’ve seen, biologics cover quite a big range of different things, and for some of them the environmental effects or the stability profiles are a little bit more difficult to think about, or to predict, or to extrapolate, right? I hear your point, and I’ve seen a lot of discussion around it, depending on what you’re working with. And in some of the things I work with, the environment is really important.

Andrew Lennard: I know some regulators are really stuck on this point, and they have a very hard time allowing predictive stability modeling, as an example.

Kim HuynhBa: Probably because of their lack of confidence and lack of knowledge on those areas, rather than anything else. Or maybe as an industry, we failed to convince them that there are some cases you can’t control. Both sides have to learn a little bit more about how to work together there.

Laureen Little (Moderator): You know, we’ve been talking about this a long time. I can remember chairing a group discussion at WCBP Conference in 1997 that Bill Egan was leading, and I was co-chairing. He brought up predictive stability, and he just got roasted. He kept looking at me like, I need help over here. I wasn’t even sure what it was, because he’d gotten into the weeds about statistics. So we’ve been talking about this for a long time.

Kim HuynhBa: Well, we came a long way though, Laureen. I remember I was in the pharma group on the Bayesians, and we got killed with that paper. I’ve been looking at this for the past 30 years, and we have come a long way.

Laureen Little (Moderator): I agree. It’s just a very slow process.

 

Question: 

Attendee: What is the difference between temperature cycling studies (besides F/T studies) and short-term storage condition studies? Can you provide examples?

Kim HuynhBa: It’s very interesting that both of these studies are actually listed in the Q1 revisions, and they are slightly different. A cycling study is pretty much to support transportation and distribution, where the environmental conditions and temperatures move back and forth. Short-term storage is for the case where the material is not stored at the label storage condition and moves outside of that condition for a short period. For example, you have to take it out and maybe store it at room temperature for a month, and that has to be studied, because it’s not part of the label storage conditions.

Attendee: What about patient self-administration? They may take things out of the fridge, and then, if they don’t use it, they put it back in the fridge.

Kim HuynhBa: If it is put back, then that short-term excursion has to be part of your stability study. How long can you keep it out there before you put it back, or can you just take it out? Say you have two years to use it, but after six months you take it out and keep it out there for a month. If that’s not at the label storage conditions, then that particular short term has to be studied.

Andrew Lennard: We got that all written into the Q1 guideline. The intent is that it’s just taken out and stored at room temperature, but if you did put it back, that’s allowed, it’s in the guideline; you just have to do the study to show that it’s okay to do. One of the main points about short-term storage is that you can open the secondary packaging. One of the first times I submitted this was for a pack of autoinjector pens: you had several in a box, which was stored in a refrigerator, but then you wanted to be able to take that box out, open it and use one pen, and then keep that open pack at room temperature until you start the second or third pen.

 

Question: 

Andrew Lennard: This was more of a comment because I, like many of us, look forward to the time where we could have a more patient centric space. 

Chat comment: “Spec and shelf-life are in balance and when we have ‘patient-centric’ specs the applicant has room to select if they want tight spec and long shelf-life or short shelf-life and wide spec or more likely something in between.”

Perceval Sondag: I agree with that comment.

Andrew Lennard: There’s a trade-off between shelf life and specification. When you have a wide enough specification that’s clinically justified, then yes, the applicant should choose whether they want a reduced shelf life and more room in the specification, or a tighter spec and a longer shelf life. It’s always a balance between those two; that’s what I was commenting on.
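The balance Andrew describes can be made concrete with a toy calculation. All numbers below are hypothetical, and it is deliberately simplified: real ICH Q1E practice intersects a 95% confidence bound on the regression with the spec limit, not the mean trend as done here.

```python
# Toy illustration of the spec/shelf-life balance: with a fixed
# degradation rate, widening the acceptance criterion directly
# lengthens the supportable shelf life. Hypothetical numbers.
release_value = 99.0      # % purity at release
slope = -0.10             # % purity lost per month (fitted trend)

def shelf_life_months(lower_spec):
    """Months until the mean trend reaches the lower specification limit."""
    return (lower_spec - release_value) / slope

for spec in (97.0, 96.0, 95.0):
    print(f"lower spec {spec:.1f}% -> shelf life "
          f"{shelf_life_months(spec):.0f} months")
```

With these numbers, each 1% of extra room in the specification buys ten months of shelf life, which is exactly the choice being left to the applicant.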

Marla Abodeely (Session Chair): Having clinical experience really helps in specification setting and formulation development. Setting specifications is probably the most challenging part, more now than ever, with how fast products are going through the phases (phase 2b/3); it’s hard to get that experience.

 

Question: 

Earl Zablackis: I just wanted to comment on my experience. There is one case where there is a clinically relevant stability specification, for the Menactra vaccine. The free polysaccharide is the degradant; the polysaccharide needs to be intact to give the vaccine its potency. So it’s one of the few cases I know of where we actually have a specification based on the clinical data that supports that product.

Perceval Sondag: So what you’re saying is, I joined a great company [Sanofi] that does that already.

Earl Zablackis: Well, yeah, I’m not sure they do it everywhere but they did it once.

Perceval Sondag: I’d say that in the non-clinical statistics community, if I dare to call it that, we’re more and more aware of that situation. A lot of experienced statisticians with pharma business knowledge, let’s put it that way, when asked to derive specifications, now have more and more of a tendency to ask: is there any chance we can link this to, and build this on, clinical data and clinical outcomes? And companies sometimes allow it.

It gets touchy, because when you’re in CMC you don’t necessarily have the clearance to access the clinical data and the clinical outcome information. So I know it’s a challenge: we could do this, but it’s going to take months and months of trying to get everything, or we could have specifications ready in two weeks with one statistical analysis. So most of the time, I believe it’s still done in what we’d call a capability-based approach, which is looking at the data at release and building specifications on tolerance intervals and the like. I think it’s still the case most of the time, but not always, and I’m very happy to hear that at Sanofi it’s not always the case.
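As a sketch of the capability-based approach Perceval mentions, a two-sided normal tolerance interval can be computed from release data alone. The data and the 95%/95% coverage/confidence choice below are hypothetical, and the k factor uses Howe’s approximation rather than exact tables:

```python
# Capability-based spec setting: a 95%/95% two-sided normal tolerance
# interval from release data (Howe's approximate k factor).
# All release values below are hypothetical.
import statistics
from statistics import NormalDist

release = [98.6, 99.1, 98.9, 99.3, 98.7, 99.0, 98.8, 99.2, 98.9, 99.1]
n = len(release)
mean = statistics.mean(release)
sd = statistics.stdev(release)

coverage, confidence = 0.95, 0.95
z_p = NormalDist().inv_cdf((1 + coverage) / 2)

# Wilson-Hilferty approximation to the lower chi-square quantile.
df = n - 1
z_a = NormalDist().inv_cdf(1 - confidence)
chi2_lower = df * (1 - 2 / (9 * df) + z_a * (2 / (9 * df)) ** 0.5) ** 3

k2 = z_p * (df * (1 + 1 / n) / chi2_lower) ** 0.5
lower, upper = mean - k2 * sd, mean + k2 * sd
print(f"95%/95% tolerance interval: {lower:.2f} to {upper:.2f}")
```

For ten batches the k factor comes out near 3.4, which is why capability-based limits from few batches tend to be much wider than the observed data range.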

Laureen Little (Moderator): I think it’s actually more common in the vaccine arena than it is in others. Most of my clients who are working in a lot of different therapeutic areas don’t have the luxury of multiple batches going into their patient population. They’d love to have that data, they would love to take the time and the number of people and then make that many batches. But, you know, their patient population is also a lot smaller.

  

Question: 

Attendee: Percy: just to confirm, for the additional batches, are only release data sufficient?

Perceval Sondag: It depends on where the uncertainty comes from. If the uncertainty comes from the batch-to-batch effect, then yes, release data will definitely help a lot. If the uncertainty comes from somewhere else, no, release data won’t be enough, and you’ll need more. Yes, we want more data, but we want the product to be given to patients as fast as possible, and I agree with that. So there are ways to handle that. Either you file with a shorter shelf life and then you update every time you have new information; with the fixed-effect approach you would never want to do that, because it puts you at risk of lowering your shelf life, but with more information, the mixed-effect model will actually possibly improve your shelf life, if your true shelf life is good. The other option is to leverage prior knowledge, with the FDA being more and more okay with that, and the EMA has been much more okay with it for a while now, as long as you justify your prior knowledge and you don’t just come up with some numbers and say, this is my Bayesian prior. That’s the main issue with Bayesian statistics: it’s easy to cheat, so it needs to be scrutinized. But everything we do, every piece of data we put in, is scrutinized. Priors are scrutinized, and then we’re good.
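A toy illustration (all numbers hypothetical) of the batch-to-batch point: when the batch effect dominates, each new release result tightens the estimate of the true process mean, so an analysis that treats batch as a random effect can support a longer shelf life as data accumulate. This is a deliberately simplified normal-approximation sketch, not a full mixed-model fit:

```python
# Hypothetical sketch: shelf life supported by a random-batch-effect
# view improves as release batches accumulate, because the standard
# error of the mean intercept shrinks with the number of batches.
true_mean_intercept = 99.0   # % purity, true process mean at release
batch_sd = 0.4               # batch-to-batch variation in release value
slope = -0.08                # common degradation rate, % per month
lower_spec = 95.0

def shelf_life(n_batches):
    """Months until a ~95% lower bound on a new batch's start crosses spec."""
    se_mean = batch_sd / n_batches ** 0.5            # uncertainty in the mean
    lower_start = true_mean_intercept - 1.645 * (
        batch_sd ** 2 + se_mean ** 2) ** 0.5         # allowance for a new batch
    return (lower_spec - lower_start) / slope

for n in (3, 6, 12):
    print(f"{n:2d} batches -> shelf life {shelf_life(n):.1f} months")
```

The gain per batch is modest here because a new batch always carries the full batch-to-batch spread; the point is only the direction: more release data never shortens the supported shelf life under this view, unlike a worst-batch fixed-effect analysis.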

 

Question:

Attendee: Sometimes with some of these legacy assays, especially potency, the day-to-day variability is so high that the noise doesn’t allow you to apply statistics for extrapolation purposes within Q1E currently. If you know it’s going to pass, because you have a lot of data for lots going through the stability program, can you still use these models to extrapolate beyond what ICH limits you to?

Andrew Lennard: We routinely evaluate our potency data, but there’s usually enough of it that, even just by eyeballing it, you can see they’re not changing over time. 

Attendee: If I’ve applied some of these models to predict shelf life, and it all looks fantastic, but I have a stability method, or another method, that I know will pass but don’t have long-term data for, is it too risky to make that extension beyond what’s currently allowed? Just in your opinion?

Andrew Lennard: I take the advice of our biostatisticians.

 

Question:

Anton Stetsenko: For linear regression, you mentioned that you can transform non-linear kinetics to make them linear and use the regression. Is that a product-specific or a method-driven decision, and what is the most common transformation you would suggest applying?

Andrew Lennard: In my experience, the only degradation that I’ve seen that’s been non-linear is high molecular weight species, and in the examples I get to see, a simple square root of time transformation gives you what looks like a very good linear result by normal goodness-of-fit testing. Traditionally, we’ve always ignored that it was non-linear, because it plateaus, so if we assume linearity, we’re getting a worst-case result. That’s always been accepted by the regulators, and that’s a sort of historical methodology that I think the Q1 revision is maintaining. But scientifically, it looks very untidy if you’re doing a linear regression on something that is clearly plateauing. So one of the main drivers is more scientific and logical: if it’s non-linear and you can transform it to linear, then surely doing that for any linear regression analysis is going to be a lot more satisfying and more informative.

Kristina Flavier: In the cases I’ve seen, it’s mostly high molecular weight species, and I do the square root of time as well. But scientifically, there are other cases, especially among chemical degradations, that can have the plateau behavior. Any sort of reversible reaction, right? Anything that reaches equilibrium or a steady state is going to show a slowdown. I know it’s very common for high molecular weight species, but it’s not out of the ordinary to see it in other things as well.
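The square-root-of-time transformation the panelists describe can be sketched as follows, with hypothetical HMW data generated to plateau: a straight-line fit against t fits noticeably worse than the same fit against sqrt(t).

```python
# Hypothetical plateauing HMW data: nearly linear in sqrt(t), concave in t.
months = [0, 1, 3, 6, 9, 12, 18, 24]
hmw = [0.50, 0.80, 1.02, 1.23, 1.40, 1.54, 1.77, 1.97]   # % HMW species

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return (sxy * sxy) / (sxx * syy)

r2_linear = r_squared(months, hmw)
r2_sqrt = r_squared([t ** 0.5 for t in months], hmw)
print(f"R^2 vs t: {r2_linear:.3f}   R^2 vs sqrt(t): {r2_sqrt:.3f}")
```

After the transformation, the usual linear-regression machinery (confidence bounds, poolability tests) applies unchanged on the sqrt(t) axis, which is the appeal Andrew describes.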

 

Question:

Kim HuynhBa: Does five points mean the addition of 18 to 24 months data?

Chana Fuchs: Talking about long-term storage, yes. Table 1 in the ICH Q1 draft is actually the one that has that three-point minimum in the subsequent comments. But that’s talking about accelerated stability, not real-time, long-term storage stability. You could use a minimum of three points, including the beginning and end. And I’m saying make that five points, with beginning and end; I wouldn’t use three points.

 

Question:

Attendee: I just wanted to check if there’s ever been a situation where folks have filed with data that is less than what they are trying to claim, and what justifications have been provided for such instances?

Chana Fuchs: There are situations like that, where they try. Sometimes what they call primary data may not necessarily be acceptable, because it’s so different, and we find we can’t support sufficient similarity of the manufacturing process, for example. But in general, when you submit an application, you submit with your available real-time stability data, and then we can request that you submit a stability update during the review process, which extends the amount of data we have. So that helps extend it a little, but you always have to remember that you’ll need a certain amount of data to make sure that you have what you need to actually market the product. If not, we give you shorter expiration dating. You don’t want expiration dating so short that you can’t work with it, right? But we give it based on your request and on what data we have, with back-and-forth conversations with the company. And sometimes we just have to wait for the next time point or something like that.

 

Question:

Attendee: My understanding from Chana’s presentation yesterday is that for post-approval changes, say changing the formulation, if comparability is demonstrated between the formulations, the same shelf life, say 36 months, can maybe be approved if the company demonstrates it with limited data for the new formulation, say six months. Does this comparability strategy also apply to clinical material, or do we need real-time, long-term storage data?

Chana Fuchs: We do not do that for the original BLA and the original shelf life. We do that when we have a lot of experience with the shelf life and the manufacturing and all kinds of variations, and then you come in with a supplement because you made a change, and we always try, for the same product, to give basically the same shelf life as the approved one. So clearly, moving to a new facility, as long as the manufacturing is really similar and the comparability is really good, and you did all these extensive comparability studies, we will do that, because we have all the knowledge and the data, plus a lot more additional data about everything that you’ve been doing since. But we will not usually do it in the original BLA. In the original BLA, you do a comparability for that purpose, but the ultimate marketing expiration dating needs to be based on the factors that were discussed.

Patricia Cash: I’d add to that, though, that in an initial IND submission, a lot of times your lead lot is your tox lot. So you’re, in essence, doing a comparability, showing that they’re similar, and determining based on that.

Chana Fuchs: Absolutely, but that’s an initial IND, not the marketing BLA. In an initial IND, we just need to see that there is sufficient stability data. I know it’s different for the Europeans and EMA, but for the US, we need to see sufficient stability data showing that the product is relatively stable, that it’s not going to just tank very quickly, to make sure that the material for patients is going to be sufficiently stable. Even in the initial IND, it’s very hard to put together the acceptance criteria, the specification, for some of that, because you don’t have the experience, and this is done based on whatever knowledge you have. You just want to make sure you have the stability, so that what patients are getting in your initial IND is of sufficient quality.

Attendee: I think what I’m getting at is, say for phase two or phase three there’s a formulation change, so we can’t apply the comparability either, with the limited data.

Chana Fuchs: You may be able to, because there’s less that you’re carrying with you from a clinical perspective. I think ICH Q5E is really good at clarifying that as you get further and further through development, you have more and more clinical data. The clinical data that you’re carrying along with you from your phase one is much smaller, but as you go on, the comparability exercises get more complicated and you need to bring in more knowledge. But even with a formulation change, you can do a change based on comparability, especially at the FDA, where we don’t really have an expiration date at the IND level; it’s kind of a living thing. In your GMP environment that’s different, but the concept of an expiration date is not necessarily there. You’re basically doing constant testing and checking. You will need to put that lot on stability for the new manufacturing process, absolutely, but not provide a full two years.

 

Question:

Attendee: So my question relates to page 21 of the EMA guideline, which does suggest that one can utilize accelerated stability data, like 4x of that, up to a shelf life of 12 months. So let’s say you have three months of accelerated data, you can give a 12-month shelf life. Has anybody used that? And what has been the pushback on that?

Patricia Cash: I haven’t used 4x, I’ve only used 2x. That’s a good question, open to the group? [Silence] I guess no one has used it. 
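The arithmetic behind this exchange is just a multiplicative factor applied to the period covered by the available data, capped at whatever maximum the applicable guideline allows. A minimal sketch, where the function name and the 12-month cap come from the attendee’s description of the EMA guideline rather than from any guideline text:

```python
def extrapolated_shelf_life(months_of_data, factor, cap_months=12):
    """Simple multiplicative extrapolation: claim a shelf life of
    factor x the months of stability data in hand, capped at the
    maximum the applicable guideline allows (assumed 12 here, per
    the attendee's description of the EMA guideline)."""
    return min(months_of_data * factor, cap_months)

# The attendee's example: 3 months of accelerated data with a 4x factor
print(extrapolated_shelf_life(3, 4))    # 12 months
# Patricia Cash's 2x, and the ICH 1.5x discussed earlier
print(extrapolated_shelf_life(3, 2))    # 6 months
print(extrapolated_shelf_life(3, 1.5))  # 4.5 months
```

The cap matters once the dataset grows: with six months of data a 4x factor would still return only 12 months under this assumed rule.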

 

Question:

Attendee: A couple of the presentations showed how adding additional data can really blow up the confidence bounds, especially towards the end of an extrapolation. Is there a certain number of lots at which you should cut off inclusion, or a time-point limit, as guidance for when you should exclude lots? Because typically, if you don’t include a lot, it looks like you’re trying to hide it, right? So we end up including a younger lot with three months of stability alongside other lots at 24 months. We’re trying to get to 36 months, and the three-month lot may blow the limits up and cause us to go out early, artificially, and we know it’s not real, it’s just due to variability. Is there some kind of rule of thumb as to when you include or exclude lots by age, or just based on the number of lots you have available for setting shelf life, if you’re locked into Q1E?

Kristina Flavier: I can say that I think this falls a little bit more in Percy’s area of expertise. As he said, it really depends on your approach to how you’re using those lots and how you’re pooling those data. With the more predictive approaches that we’re usually looking at, typically we add it. In this case, adding in more lots improves your statistics, at least with the temperature-based approaches. If it doesn’t, then I think that’s an indication that there’s a problem in the model, or something that needs a closer look. For our approaches, honestly, we would encourage looking at all lots and seeing if they match the model predictions. Again, it depends on exactly what kind of calculations you’re doing there.

Attendee: Yeah, this is particularly geared towards lots that may have a noisy method that creates an artificial decline in stability that comes right back up at the next time point, right? That’ll still blow it up.

Kristina Flavier: This is a little bit more of a statistical question, but I’d say it comes back to this: your predictions are really only as good as your analytical method. Unfortunately, some of it might come down to how you are calculating those confidence intervals, how you’re taking the low method precision through into your predictions. So the very short answer, unfortunately, is that it’s a little bit of a complicated statistical discussion of what’s the best way to handle that.
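To make the exchange above concrete: in a Q1E-style pooled regression, the one-sided confidence bound at the proposed shelf life scales with the residual standard deviation, so a young lot measured with a noisy method can widen the bound at 36 months even though it adds data points. A minimal sketch with made-up potency numbers; the lot values and the simple pooled model (no per-lot terms) are illustrative assumptions, not anyone’s real data or procedure:

```python
import numpy as np
from scipy import stats

def pooled_lower_bound(t, y, t_star, alpha=0.05):
    """Fit a pooled linear regression y = a + b*t and return the slope,
    fitted value, one-sided lower (1-alpha) confidence bound on the mean
    response at t_star, and the bound's half-width."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t)
    b, a = np.polyfit(t, y, 1)          # slope, intercept
    resid = y - (a + b * t)
    df = n - 2
    s = np.sqrt(resid @ resid / df)     # residual standard deviation
    sxx = ((t - t.mean()) ** 2).sum()
    se = s * np.sqrt(1 / n + (t_star - t.mean()) ** 2 / sxx)
    half_width = stats.t.ppf(1 - alpha, df) * se
    y_hat = a + b * t_star
    return b, y_hat, y_hat - half_width, half_width

# Two well-behaved 24-month lots (hypothetical potency, ~ -0.1 %/month)
t_long = [0, 3, 6, 9, 12, 18, 24]
lot_a = [100.2, 99.6, 99.5, 98.9, 98.8, 98.3, 97.5]
lot_b = [ 99.8, 99.8, 99.4, 99.3, 98.7, 98.0, 97.7]
# One young lot measured with a noisy method
t_short = [0, 1, 2, 3]
lot_c = [100.0, 98.5, 101.0, 98.0]

t_wo, y_wo = t_long * 2, lot_a + lot_b
t_w, y_w = t_wo + t_short, y_wo + lot_c

slope_wo, _, lb_wo, hw_wo = pooled_lower_bound(t_wo, y_wo, t_star=36)
slope_w, _, lb_w, hw_w = pooled_lower_bound(t_w, y_w, t_star=36)
print(f"half-width at 36 mo, 24-month lots only:   {hw_wo:.2f}")
print(f"half-width at 36 mo, noisy 3-mo lot added: {hw_w:.2f}")
```

With these numbers the half-width at 36 months grows several-fold once the noisy three-month lot is pooled in, which is the blow-up the attendee describes; per-lot poolability testing or a mixed-effects analysis would be the usual statistical remedies.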

 

Question:

Laureen Little (Moderator): I had one question for John. You mentioned that you pulled a lot of this information about stability from different work groups, but that you didn’t have one working specifically on stability, is that what you said?

John Campbell: So the survey questions came from the BioPhorum Development Group forced degradation work stream. The people on that work stream are experts in forced degradation. That does not necessarily mean experts on stability, but it’s stability-adjacent, and we certainly have people on the work stream who do predictive stability, and some who don’t.