Updated: Jun 18
Disclaimer: this and the following posts cover only the release-test-related OOTs; the stability OOT-limits are not considered.
I started to write a post about how to establish OOT-limits, but it grew rather long, so I intend to publish it in several parts. Let me share the first one:
There is a huge literature available on how to investigate OOT results. But how do you establish a good OOT-limit structure? That can be confusing sometimes.
There is the good old WECO model (the Western Electric rules) for a dynamic signalling system - again, with a lot of knowledge shared on the internet. I personally find this model oversensitive for some pharma attributes, like yield, impurity level or assay.
So let's start thinking about the static model: its lifecycle can be driven by the Product Quality Review document, where we need to evaluate a lot of QC results anyway. So while we evaluate last year's trend, we can also establish next year's limits.
But what rules of thumb could we use for this task?
Where NOT to determine OOT-limits
1. Using ICH Q2
To ease our own task, let's quickly exclude some quality attributes from the scope.
ICH Q2 (even the current, effective version) contains a great table about the different types of analytical procedures: identification, quantitative and limit testing for impurities, and assay.
Regarding identity tests, even if we prove the conformity of a product through a numerical result (like polarimetry or pH), the answer is ultimately a Yes or a No. (Let's not forget that in case of an OOS the actual numerical result can give us a hint about the root cause, but let's not go down that rabbit hole for now.) So simply eliminate all identity tests from the scope.
The same applies to limit tests: you cannot trend Yes/No answers.
Let me quickly add a little spoiler here: our tool for establishing OOT-limits will be the +/-3sigma approach.
And let me share maybe the most crucial part of having a responsive system: we can use the +/-3sigma approach only for attributes that follow a normal distribution.
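As a minimal sketch of the idea (plain Python, with hypothetical assay results standing in for a year's worth of batch data), the +/-3sigma limits fall out of the historical mean and standard deviation:

```python
import statistics

# Hypothetical assay results (%) from last year's released batches
assay_results = [99.1, 98.7, 99.4, 99.0, 98.9, 99.2, 98.8, 99.3, 99.0, 99.1]

mean = statistics.mean(assay_results)
sigma = statistics.stdev(assay_results)  # sample standard deviation

# Static OOT limits for next year's trend evaluation
lower_oot = mean - 3 * sigma
upper_oot = mean + 3 * sigma

print(f"OOT limits: {lower_oot:.2f} - {upper_oot:.2f}")
```

Any future batch landing outside that band would trigger an OOT investigation; the band itself gets refreshed at the next annual review.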
Two quick thoughts here. First, don't be misled by the fact that you can calculate a standard deviation from any set of numbers - no Excel, Minitab or other tool will stop you from doing that. So you need to be careful about which quality attributes you evaluate.
Second: there are numerous statistical tests (a bunch of them even automated - that's really tempting!) to check whether your data set follows a normal distribution. But don't rely on those: as QC, you need to own the theoretical knowledge of which analytical methods generate normally distributed results.
Let's see some examples: an organic impurity, the water content, or an LC assay 'behaves' normally, whatever your statistical tests say about it. On the other hand, pH by definition cannot follow a normal distribution, since it is on a logarithmic scale. The same applies to particle-size distribution, whatever exotic dimension you use to report the results.
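One way to make that theoretical knowledge explicit, instead of re-deciding it in every review, is a simple lookup that the trending script consults before computing any limits. The attribute names and the mapping below are purely illustrative:

```python
# Hypothetical mapping: does the attribute's analytical method
# produce (approximately) normally distributed results?
NORMAL_ATTRIBUTES = {
    "organic impurity A": True,
    "water content": True,
    "LC assay": True,
    "pH": False,                 # logarithmic scale
    "particle size d50": False,  # not normally distributed
}

def trendable(attribute: str) -> bool:
    """Only attributes flagged as normal get +/-3sigma OOT-limits."""
    return NORMAL_ATTRIBUTES.get(attribute, False)

in_scope = [a for a, ok in NORMAL_ATTRIBUTES.items() if trendable(a)]
print(in_scope)
```

Unknown attributes default to False, so nothing gets trended by accident before QC has made the theoretical call.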
A quite common specification element is a limit for a group of impurities or components. Again, our job is to stop ourselves from defining OOT-limits for these quality attributes: although the individual impurities may follow a normal distribution, their aggregate most likely will not.
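A toy simulation (hypothetical numbers, plain Python) hints at one common reason, assuming the usual reporting convention that results below the LOQ are reported as the LOQ itself: the summed "total impurities" figure then inherits a hard floor and a distorted shape that a +/-3sigma band would misrepresent.

```python
import random

random.seed(42)
LOQ = 0.05  # hypothetical limit of quantitation, %

def reported(value: float) -> float:
    """Results below the LOQ are reported as the LOQ itself."""
    return max(value, LOQ)

# Three individually near-normal impurities, summed per batch
totals = []
for _ in range(200):
    batch = [random.gauss(0.06, 0.02) for _ in range(3)]
    totals.append(sum(reported(x) for x in batch))

# The reporting rule puts a hard floor under the total,
# so the aggregate cannot be normally distributed:
print(min(totals))  # can never fall below 3 * LOQ
```

The individual impurities remain fair game for trending; it is only the reported aggregate that misbehaves.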
In the second part, we will discuss what to do with the remaining quality attributes.