There are four main components to a well-established training program that should be considered for your electronic quality system: procedures, hands-on training, experienced and knowledgeable trainers, and a measure of training effectiveness.
When developing procedures (e.g., standard operating procedures, work instructions, quick reference guides, or training materials), I've found it beneficial to create either a dedicated section or a separate document explaining how to manage the electronic quality system, plus separate work instructions for its users. The procedure for managing the system should provide a high-level philosophy, stating its intent, scope, and objective while defining what belongs in the system and what does not. Additionally, it should define the timing for activities and specify who is responsible for them. The work instructions should guide users in navigating the system, define all of the fields within it, and describe the data expected in each field.
Regarding hands-on training, it is extremely beneficial to establish a sandbox or training environment in which users can work to understand the process. Individuals generally learn more through hands-on experience than through any other learning technique. The benefit of a sandbox or training environment is that individuals can learn at their own pace and without the risk of making mistakes in a production environment. If a sandbox or training environment is not available, then at the very least a demonstration of the system should be incorporated into the training, covering all possible paths through the system and their respective outcomes.
One of the most important components is having experienced and knowledgeable trainers. These individuals should understand more than just how the system works; they should know how and why it was developed, from both a quality (content) perspective and a technical one. This becomes key as users raise questions, concerns, and comments about the system. The better informed your users are, the more consistent the data they enter will be, and the greater its integrity.
Measuring training effectiveness is another key determinant of whether a system will succeed. An effective training program will yield positive results, sometimes beyond what the company expects. However, measuring the effectiveness of training is one of the biggest challenges facing companies today. Training in itself is expensive, and adding more components to it may not be a good idea in terms of financial capacity or value. The measures used by companies across a variety of industries may include one or more of the following: self-exams, on-the-job (OJT) training, certifications, and management audits of the program. Ultimately, metrics should be used to gauge the effectiveness of the training. It would not be wise to declare a training curriculum effective simply because the trainees pass an exam. What needs to be done is to review the metrics for these employees and see whether they meet the company's expectations. For example, when evaluating a deviation system, one might compare several investigations of the same type performed by different investigators and reviewed/closed by different quality officials, looking for consistencies or inconsistencies. Such a comparison reveals both how well these individuals were trained and how well the electronic quality system was developed.
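As a minimal sketch of the kind of consistency check described above, the following snippet (using entirely hypothetical field names and sample data) groups deviation records by deviation type and flags any type that different investigators classified in conflicting ways:

```python
from collections import defaultdict

def classification_consistency(records):
    """Group deviation records by deviation type and return the types whose
    root-cause classifications disagree across investigators. `records` is a
    list of dicts with hypothetical keys: 'deviation_type', 'investigator',
    and 'classification'."""
    by_type = defaultdict(set)
    for rec in records:
        by_type[rec["deviation_type"]].add(rec["classification"])
    # A deviation type mapped to more than one classification signals an
    # inconsistency worth auditing.
    return {dtype: sorted(classes)
            for dtype, classes in by_type.items()
            if len(classes) > 1}

# Hypothetical sample data: three investigators, two deviation types.
records = [
    {"deviation_type": "label mix-up", "investigator": "A",
     "classification": "human error"},
    {"deviation_type": "label mix-up", "investigator": "B",
     "classification": "procedure gap"},
    {"deviation_type": "temperature excursion", "investigator": "A",
     "classification": "equipment failure"},
    {"deviation_type": "temperature excursion", "investigator": "C",
     "classification": "equipment failure"},
]

print(classification_consistency(records))
# {'label mix-up': ['human error', 'procedure gap']}
```

In practice the disagreements flagged this way are only a starting point; a quality reviewer would still need to judge whether the differing classifications reflect genuine process variation or a training gap.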
Thinking about these components of the training program and their individual and collective impact on the overall electronic quality system, I'm reminded of the phrase "garbage in, garbage out." To expound upon this: failures in human decision-making due to faulty, incomplete, or imprecise data are costly from both a financial and a compliance standpoint. The difference between a successful and an unsuccessful electronic quality system will depend, in part, on how well your training program is developed and implemented.