*Corresponding Author:
D. Satyanarayana
Department of Pharmacy, Annamalai University, Annamalainagar-608 002, India
E-mail: [email protected]
Date of Submission: 23 June 2005
Date of Revision: 24 February 2006
Date of Acceptance: 07 September 2006
Indian J Pharm Sci 2006, 68 (5): 615-621
A chemometric model for the simultaneous estimation of phenobarbitone and phenytoin sodium in anticonvulsant tablets using back-propagation neural network calibration is presented. The use of calibration datasets constructed from the spectral data of the pure components is proposed. The calibration sets were designed such that the concentrations were orthogonal and spanned the possible mixture space fairly evenly. Spectra of phenobarbitone and phenytoin sodium were recorded at several concentrations within their linear ranges and used to compute the calibration mixtures between 220 and 260 nm at 1 nm intervals. The back-propagation neural network model was optimized for the number of hidden sigmoid neurons using three different sets of calibration and monitoring data. The calibration model was thoroughly evaluated at several concentration levels using spectra obtained for 95 synthetic binary mixtures prepared using orthogonal designs. The optimized model remained robust even when the calibration sets were constructed from different sets of pure-component spectra. Although the components showed complete spectral overlap, the model accurately estimated the drugs in the tablet dosage form, with satisfactory precision and accuracy and no interference from excipients, as indicated by the recovery study results.
Simultaneous determination of the components of a multicomponent drug formulation can be a difficult task, especially when the components closely resemble each other from the analytical standpoint and other pharmaceutical excipients are present. In the recent past, multivariate chemometric methods for the analysis of multicomponent systems have been widely reported in international journals [1-4], largely owing to the advent of fast, affordable computers and rapid-scanning, software-controlled spectrophotometers.
Neural networks (NNs) of appropriate architecture have the ability to approximate any function to any desired degree. However, it has been shown that the transfer function must be continuous, bounded and non-constant for a NN to approximate any function [5]. Fundamental background information on NNs can be found elsewhere [6-10]. Pharmaceutical applications of NNs have recently been reviewed by the authors. Computationally, the NN is an approach for handling multivariate and multiresponse data and is hence suitable for modelling, i.e., a search for an analytical function that will give a specified n-variable output for any m-variable input. Unlike standard modelling techniques, where the mathematical function must be known in advance, NN models require no such prior knowledge and are therefore called 'soft models', i.e., models able to represent the experimental behaviour of a system when its exact description is missing or too complex. NNs adapt to any relation between input and output data on the basis of their supervised training. The characteristics that set NN systems apart from traditional computing are learning by example, distributed associative memory, fault tolerance and pattern recognition. The flexibility of NNs and their ability to maintain performance even with significant amounts of noise in the input data are highly desirable [6,13], since perfectly linear and noise-free data sets are seldom available in practice; this makes them well suited to multivariate calibration modelling. There are some recent reports on the application of NNs to mixture analysis [14-18], though most employ a separate network for the estimation of each component and use synthetic binary mixtures for calibration.
Phenobarbitone (PBT) is a long-acting barbiturate and an effective sedative anticonvulsant for the treatment of generalized tonic-clonic, simple partial and complex partial seizures. The drug is also effective in status epilepticus. Phenytoin sodium (PTN) is an anticonvulsant drug for the treatment of seizures. For the management of epilepsy, PBT and PTN combination tablets containing the drugs in the ratio 3:10 are available. This paper presents a rapid and economical method for routine pharmaceutical quality control of this tablet dosage form by multivariate calibration based on soft modelling using a back-propagation neural network.
Materials and Methods
Analytical reagent grade sodium hydroxide was used to prepare 0.01 M sodium hydroxide solution in distilled water, which then served as the solvent for the stock solutions and all further dilutions of PBT, PTN, their standard combination and the tablet powder.
UV absorption measurements were carried out on a PerkinElmer Lambda 25 double beam spectrophotometer controlled by UVWINLAB software version 2.85.04, using matched 1.00 cm quartz cells. Class A volumetric glassware, such as pipettes and volumetric flasks, was used for the purpose of making dilutions. All weights were measured on an electronic balance with 0.01 mg sensitivity. Spectra of all the solutions were recorded against a blank solution containing no analytes, between 215 and 300 nm, and saved in ASCII format. All computations were carried out on a desktop computer with a Pentium 4, 1.6 GHz processor and 256 MB RAM.
A graphical user interface-based neural network simulator named "Neuralyzer", developed in-house by the authors [18-19] in the Java programming language, handled all the tasks required to develop, train and validate the back-propagation neural network (BPNN) models and to apply them to the analysis of test-sample spectral data. Analysis of an antihypertensive combination using this software has already been reported. Major features of Neuralyzer include the ability to generate the training (calibration) and monitoring sets; design and configure the BPNN; train the neural network; use the monitoring error to stop training at an appropriate point; create a report of the analysis of the external validation spectra along with prediction parameters; analyze test spectra; and report the concentrations of the analytes in the test sample. A trained BPNN model can be saved and reloaded into Neuralyzer at any time for analysis.
Preparation of standard solutions
Standard solutions of pure PBT and PTN were made at concentration levels ranging from 2 to 10 mg/l and 6 to 21 mg/l, respectively, for linearity determination and for designing the calibration data matrix from their spectra. Analytical levels of 4 and 13.33 mg/l were chosen for PBT and PTN, respectively. The absorbance spectra around the chosen analytical levels of the two standards are shown in fig. 1.
Figure 1: UV spectra of PBT and PTN
Overlain spectra of (a) PBT (····) at a concentration of 3.44 mg/l and (b) PTN (—) at a concentration of 10.63 mg/l in 0.01 M sodium hydroxide. The relative concentrations reflect the chosen analytical levels and the typical ratio of the two drugs in a commercial tablet dosage form
Synthetic binary mixtures for model evaluation
The preparation of synthetic binary mixtures from fresh stock solutions of pure PBT and PTN in 0.01 M sodium hydroxide was spread over 9 different days, with separate weighings on each day. Standard mixtures of the components were prepared, with concentrations lying within the known linear absorbance-concentration ranges, by mixing varying proportions of the PBT and PTN stock solutions; the concentration of PBT varied between 50 and 180% of its test level concentration, while that of PTN varied between 50 and 150% of its test level concentration. The dataset obtained from the spectra of these mixtures was designated T. The concentrations of the components were selected to span the mixture space fairly evenly, as shown in fig. 2.
Figure 2: Synthetic binary mixtures for the evaluation of the neural network calibration models
Each point represents a mixture at the respective concentrations of the components. The design ensures that the model is thoroughly validated over a well-distributed concentration space, especially around the chosen analytical levels
Tablet solutions for analysis
For the analysis of the active components of the anticonvulsant tablet (Phenytal, PBT 30 mg and PTN 100 mg, Intas Pharmaceuticals, India, Batch No: C014), 20 tablets were accurately weighed, carefully powdered and mixed. Tablet powder corresponding to the equivalent of 45 mg of PTN was dissolved in 0.01 M sodium hydroxide solution by sonication for 5 min and made up to 100 ml. The solution was centrifuged and 3 ml of supernatant was diluted to 100 ml. Three replicate dilutions were made from each stock solution, repeating the entire process for a total of five weights of the tablet powder.
For accuracy studies, by recovery, the same tablet powder was used in amounts corresponding to the equivalent of 30 mg of PTN (in order to enable spiking up to desired levels). The powder was then spiked with a known quantity of pure PBT and PTN and dissolved in 0.01M sodium hydroxide by sonication and made up to 100 ml with the same solvent. The solution was then centrifuged and 3 ml of supernatant was diluted to 100 ml. A total of five powder samples were spiked to different levels in the range of 80 to 120%, each in three dilution replicates.
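As a back-of-envelope check of the dilution scheme described above (our own arithmetic, not stated in the paper), the working concentrations of the tablet solution can be computed; they fall close to the chosen analytical levels of 4 and 13.33 mg/l:

```python
# Assay dilution sketch (assumed arithmetic): powder equivalent to 45 mg
# PTN dissolved and made up to 100 ml, then 3 ml diluted to 100 ml.
ptn_stock = 45.0 / 0.1                 # 450 mg/l PTN in the stock solution
dilution = 3.0 / 100.0                 # second dilution factor
ptn_final = ptn_stock * dilution       # working PTN concentration, mg/l
pbt_final = ptn_final * 30.0 / 100.0   # tablet ratio PBT:PTN = 30:100 mg
print(ptn_final, pbt_final)            # → 13.5 4.05
```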
Since the spectra were linearly additive in the desired range, no serious baseline problems or interactions were found, and the majority of chemometric techniques for regression and calibration assume linear additivity, the process described below was adopted in designing the calibration data set for training the BPNN, as shown in Table 1. Three spectra of each component, at three different concentration levels (low, medium and high), were employed in all possible combinations to provide a fair simulation of a calibration data set with some degree of experimental variation. The calibration dataset spanned concentrations from 2 to 8 mg/l at 1 mg/l intervals for PBT and from 6 to 20 mg/l at 2 mg/l intervals for PTN. All target concentrations in the calibration set were then normalized to lie between 0.1 and 0.9. A full factorial design thus yielded 56 training pairs from each spectral pair, for a total of 504 training pairs (56 × 9) representing the mixture space evenly, with orthogonal target concentrations; these constituted the complete calibration set used to train the BPNN. The spectral region between 220 and 260 nm was chosen on the basis of visual inspection of the spectra, which showed extensive overlap. Absorbance values at 1 nm intervals in the selected region served as the model inputs (41 absorbance values at the corresponding wavelengths). Three calibration datasets (C1, C2 and C3) were created using different pairs of spectra of the PBT and PTN standards; each calibration dataset differed from the others by the use of at least two different pairs of spectra in its computation.
|Number of sigmoid hidden neurons|2|3|4|5|6|
|Calibration model / BPNN replicate|% mean RPE|% mean RPE|% mean RPE|% mean RPE|% mean RPE|
Table 1: Optimization of BPNN calibration model based on mean % RPE
Randomized monitoring data sets were used for internal validation and for terminating the training of the BPNN at an optimum point, to prevent over-fitting and retain the generalization ability of the network. The monitoring dataset essentially consisted of a calibration dataset (stored with a different file extension) different from the one used for the calibration (training) of the BPNN model.
BPNNs had an input layer with neurons corresponding to the number of wavelengths in the selected spectral range, a variable number of neurons in the hidden layer and two neurons in the output layer corresponding to the two components of interest. The input and output layer nodes had linear transfer functions, while only the hidden layer nodes had a sigmoid transfer function, a choice based on our earlier study on neural models. Optimization of the network, to achieve generalization and avoid over-fitting, started with two neurons in the hidden layer, gradually increasing the number until no significant improvement in performance (<2% in mean percentage relative prediction error, % RPE) was achieved.
Training the BPNN
BPNNs were trained using the popular gradient descent algorithm, which performs a steepest-descent minimization on the error surface in the hyperspace of adjustable parameters, as described and popularized by Rumelhart and McClelland [20]. The algorithm has been presented in the authors' tutorial review on NNs. All the BPNNs were trained with a learning rate of 0.1 and a momentum of 0.5, based on earlier experience. Training was monitored against the corresponding monitoring set every 100 epochs (since a fairly large monitoring set, equal in size to the calibration set, was used) to prevent memorization (over-fitting), and was terminated as soon as any of the following criteria was met: (a) the root mean square error of monitoring (RMSEM) rose while the root mean square error of training (RMSET) was still falling; (b) the RMSEM rose continuously for 1000 epochs; or (c) the decrease between two successive RMSEM values fell below a pre-specified threshold of 1.0 ppm per epoch on average, in which case the network was assumed to have stabilized with adequate generalization capacity. The BPNNs were then frozen to prevent further training and preserve the weights.
Evaluation of the BPNN models
All trained BPNNs with different configurations were evaluated for their modelling capability by testing with the spectral dataset (T) obtained from the synthetic binary mixtures (fig. 2). The mean % RPE representing the combined error for the entire mixture was used to perform ANOVA and multiple comparisons between all the models developed with three different calibration datasets, and the optimum configuration for the calibration model to achieve best generalization and prevent over-fitting or memorization was chosen. The optimized model was characterized by its performance parameters such as % RPE and other regression parameters for the actual versus the predicted concentrations such as slope, intercept, residual standard deviation and the square of correlation coefficient (R2) for each component of the mixture.
Spectra recorded from the tablet solutions were analyzed by the optimized BPNN model and the concentrations predicted for each solution were used for calculation of the tablet content. Similarly PBT and PTN concentrations in the solutions prepared for recovery study were also obtained from the respective spectra and percentage recovery was calculated to determine the accuracy of the method.
Results and Discussion
The overlaid absorption spectra in fig. 1 show complete spectral overlap, which complicates the determination of the individual drug concentrations from the spectrum of a mixture. Considered separately, concentrations between 2 and 10 mg/l for PBT and between 6 and 21 mg/l for PTN were found to be linear, with r2 values of 0.9999 and 0.9998, slopes of 0.0391 and 0.0647, intercepts of 0.0005 and -0.0081, and residual standard deviations about the regression line of 0.0008 and 0.0056, respectively.
There are many pitfalls in the use of calibration models, perhaps the most serious being variability in instrument performance over time. Each instrument has different characteristics, and the response may vary from day to day and even from hour to hour. It is therefore necessary to re-form the calibration model on a regular basis by running a standard set of samples, possibly weekly [21]. As with other regression methods, there are constraints on the number of samples, which at times may limit the development of an NN model. The number of adjustable parameters (synaptic weights) is such that the calibration set is rapidly over-fitted if too few training pairs are available, leading to loss of generalization ability. Calibration sets of several hundred training pairs may therefore be necessary to obtain a representative distribution of concentrations across their range. Physically preparing calibration mixtures in such large numbers is expensive in time and resources and is rarely possible in routine laboratory studies; this justifies our attempt to use calibration data sets constructed mathematically from the individual spectra of the components.
In order to simplify model development, absorbances between 220 and 260 nm at 1 nm intervals were chosen by visual inspection of the spectra, giving 41 absorbance inputs. Each BPNN was trained five times with random initialization of weights for each of the three calibration datasets, and the mean % RPE was calculated by averaging the % RPE of both components on the test dataset (T) derived from the synthetic binary mixtures. The mean square error (MSE), root mean square error (RMSE) and % RPE for each component were calculated using the formulae:

MSE = (1/mn) Σs Σi (ysi − outsi)²; RMSE = √MSE; % RPE = (100 × RMSE)/(mean concentration),

where ysi is the i-th component of the desired target Ys, outsi is the i-th component of the output produced by the network for the s-th input vector, m is the number of input vectors or samples and n is the number of output variables (here, the number of components in the mixture). The results are presented in Table 2. The number of hidden neurons was optimized on the basis of the mean % RPE produced by each BPNN on the test dataset from the synthetic binary mixtures. Three hidden neurons were found to be optimal; beyond this, ANOVA showed no significant difference in the performance of the BPNNs. The smallest adequate number of hidden neurons was chosen, to minimize the number of adjustable parameters (weights), as a rule of thumb in NN training. Hence, the optimal BPNN configuration had 41 input neurons, 3 hidden neurons and 2 output neurons.
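These error metrics can be sketched as follows, with % RPE computed from the RMSE relative to the mean target concentration; the target/output values below are illustrative, not from the paper:

```python
# Sketch of the error metrics: MSE over all m*n entries, RMSE = sqrt(MSE),
# and % relative prediction error referred to the mean target concentration.
import numpy as np

def error_metrics(targets, outputs):
    """targets, outputs: (m, n) arrays of desired and predicted values."""
    mse = np.mean((targets - outputs) ** 2)  # averages over m samples, n outputs
    rmse = np.sqrt(mse)
    rpe = 100.0 * rmse / np.mean(targets)    # % relative prediction error
    return mse, rmse, rpe

t = np.array([[4.0, 13.33], [3.0, 10.0]])    # illustrative targets (mg/l)
o = np.array([[4.1, 13.20], [2.9, 10.1]])    # illustrative network outputs
mse, rmse, rpe = error_metrics(t, o)
print(round(rmse, 3), round(rpe, 2))         # → 0.108 1.43
```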
|Calibration data set|Test data set|% RPE|Slope|Intercept|Res. SD|R2|
Table 2: Phenobarbitone concentration prediction characteristics by back-propagation neural network models from synthetic binary mixtures
The optimal BPNN configuration was evaluated for robustness by training the network with three different calibration and monitoring sets and investigating the prediction characteristics using 180 spectra (including replicates) obtained from 95 synthetic binary mixtures. (In all, 99 mixtures were prepared and scanned, of which 4 were found to be consistent outliers due to pipetting and transfer errors and were eliminated.) The regression characteristics of the predictions are listed in Tables 2 and 3 for PBT and PTN, respectively, and the residual-versus-fit plots are shown in fig. 3. These results imply that the BPNN model performed well irrespective of the calibration data set and hence was rugged enough for the periodic recalibration necessitated by conditions such as variability in instrument performance.
|Calibration data set|Test data set|% RPE|Slope|Intercept|Res. SD|R2|
Table 3: Phenytoin sodium concentration prediction characteristics by back-propagation neural network models from synthetic binary mixtures
Spectra obtained from 30 tablet solutions (including replicates) prepared from five different weighed samples, as described in the experimental section, were analyzed by the BPNN model and the average content was calculated. The results are summarized in Table 4. The accuracy of the method for tablet analysis was further investigated using the recovery studies described earlier. The mean percentage recovery with the BPNN model was 99.41 for PBT and 100.14 for PTN, with relative standard deviations of 0.604 and 0.473 respectively, which is well within acceptable limits and reflects the reliability of the model in the analysis of these tablets. The results are shown in Table 5.
| |PBT (mg), C1|PBT (mg), C2|PBT (mg), C3|PTN (mg), C1|PTN (mg), C2|PTN (mg), C3|
|---|---|---|---|---|---|---|
|Sample 1|31.01|31.04|30.17|98.28|99.51|99.19|
|Sample 2|30.93|30.82|30.21|100.70|101.62|101.44|
|Sample 3|31.23|31.51|30.82|98.20|99.00|98.90|
|Sample 4|30.30|30.16|29.76|99.24|99.95|99.92|
|Sample 5|30.34|30.58|30.44|98.44|98.62|98.76|
|Mean tablet content (mg)|30.76|30.82|30.28|98.97|99.74|99.64|
|Relative std deviation|1.364|1.638|1.283|1.061|1.170|1.106|
|Amount on the label (mg)|30.00|30.00|30.00|100.00|100.00|100.00|
|% of the reported content|102.54|102.74|100.94|98.97|99.74|99.64|
Table 4: Tablet analysis results using BPNN models
| |PBT|PTN|
|---|---|---|
|Mean % recovery|99.41|100.14|
|Relative standard deviation|0.604|0.473|
Table 5: Results obtained for recovery studies using the BPNN model
Multivariate calibration techniques utilize a large number of variables from the problem domain to obtain a solution and hence are highly reliable and robust. The ability of NNs to learn and derive absorbance-concentration relationships from a set of training samples, combined with the fundamental principle of distributing information among several weights and nodes, renders an NN model robust to random noise in the input data and allows several NNs with different topologies to converge to qualitatively equivalent results [22]. The BPNN model developed in this study performed well when tested with spectra recorded on different days and exhibited ruggedness even when different sets of constructed calibration data were used in model development, as indicated by the validation results using the spectra of a large number of synthetic binary mixtures. The study revealed that a back-propagation model can be used successfully for multivariate calibration in the analysis of this anticonvulsant formulation by UV spectrophotometry, economically and rapidly, in spite of the extreme overlap of the component spectra. The BPNN model can be recalibrated quickly whenever the spectrophotometer's performance characteristics alter, using only three pairs of spectra of the individual components, offering a considerable advantage over classical methods.
The authors are grateful to the All India Council for Technical Education, New Delhi, for providing financial assistance under an R&D scheme to carry out this work. We are grateful to Medopharm Ltd., Chennai, India, for the generous donation of drug samples used in this study.
1. Goicoechea, H.C. and Olivieri, A.C., J. Pharm. Biomed. Anal., 1999, 20, 681.
2. Dinc, E., Baleanu, D. and Onur, F., J. Pharm. Biomed. Anal., 2001, 26, 949.
3. Dinc, E., Serin, C., Tugcu-Demiroz, F. and Doganay, T., Int. J. Pharm., 2003, 250, 339.
4. Ustundag, O. and Dinc, E., Pharmazie, 2003, 58, 623.
5. Hornik, K., Neural Networks, 1991, 4, 251.
6. Fausett, L., In; Fundamentals of Neural Networks: Architectures, Algorithms and Applications, 1st Edn., Prentice Hall Inc., New Jersey, 1994, 1.
7. Schalkoff, R.J., In; Artificial Neural Networks, 1st Edn., McGraw-Hill, New York, 1997, 1.
8. Zupan, J. and Gasteiger, J., In; Neural Networks in Chemistry and Drug Design, 2nd Edn., Wiley-VCH, New York, 1999, 156.
9. Sathyanarayana, D., Kannan, K. and Manavalan, R., Indian J. Pharm. Edu., 2004, 38, 123.
10. Sathyanarayana, D., Kannan, K. and Manavalan, R., Indian J. Pharm. Edu., 2005, 39, 5.
11. Sathyanarayana, D., Kannan, K. and Manavalan, R., Indian J. Pharm. Edu., 2005, 39, 62.
12. Cukrowska, E., Cukrowski, I. and Havel, J., S. Afr. J. Chem., 2000, 53, 213.
13. Despagne, F. and Massart, D.L., Analyst, 1998, 123, 157R.
14. Yin, C., Shen, Y., Liu, S., Yin, Q., Guo, W. and Pan, Z., Comput. Chem., 2001, 25, 239.
15. Ni, Y., Liu, C. and Kokot, S., Anal. Chim. Acta, 2000, 419, 185.
16. Absalan, G. and Soleimani, M., Anal. Sci., 2004, 20, 879.
17. Balamurugan, C., Aravindan, C., Kannan, K., Sathyanarayana, D., Valliappan, K. and Manavalan, R., Indian J. Pharm. Sci., 2003, 65, 274.
18. Sathyanarayana, D., Kannan, K. and Manavalan, R., Indian J. Pharm. Sci., 2004, 66, 745.
19. Sathyanarayana, D., Kannan, K. and Manavalan, R., Annamalai University J. Engg. Tech., 2005, Diamond Jubilee special issue, 168.
20. Rumelhart, D.E., Hinton, G.E. and Williams, R.J., In; Rumelhart, D.E. and McClelland, J.L., Eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1st Edn., Vol. 1, The M.I.T. Press, Cambridge, MA, 1986, 318.
21. Brereton, R.G., Analyst, 2000, 125, 2125.
22. Zupan, J., Acta Chim. Slov., 1994, 41, 327.