pakrb wrote: How many different scripts do you typically use in a PerformanceTest project?
Very good question.
Even though eValid can run a different script in every eValid Browser User (BU) instance, in practice you typically don't do that.
Instead, we find that users typically take one of two approaches:
(A) Choose one longer script that incorporates all of the different kinds of activity that are going to be performed, and then replicate that one script 100's or 1,000's of times. The idea is that if you run enough copies in parallel, the workload imposed is a realistic one.
(B) Pick a very small number of simple scripts -- maybe 5-10 at most -- each focused on one particular feature or function, and run them in fixed proportions on each driver machine. For example, 10% of this one, 5% of that one, 25% of that other one, and so on (a small allocation sketch follows below).
In either case, the goal is the same: have a mix of load that is as realistic as possible.
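To make approach (B) concrete, here is a minimal sketch of how you might allocate a fixed mix of scripts across the BU instances on one driver machine. The script names, percentages, and instance count are purely illustrative, not part of the eValid product:

```python
# Hypothetical sketch of approach (B): assign a fixed mix of simple scripts
# across a pool of browser-user (BU) instances on one driver machine.
mix = {
    "login.evs":    0.10,   # 10% of instances run the login script
    "search.evs":   0.25,   # 25% run the search script
    "checkout.evs": 0.05,   # 5% run the checkout script
    "browse.evs":   0.60,   # remainder browse the catalog
}

total_instances = 200       # BU instances available on this driver machine

# Convert proportions to whole instance counts, giving any rounding
# shortfall to the largest share so the counts sum to total_instances.
counts = {script: int(p * total_instances) for script, p in mix.items()}
counts[max(mix, key=mix.get)] += total_instances - sum(counts.values())

for script, n in counts.items():
    print(f"{script}: {n} instances")
```

Keeping the proportions fixed per driver machine means the overall load mix stays stable as you add or remove driver machines to scale the test up or down.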
The eValid Team