PhilipR wrote: Afternoon.
We've been working with automated testing for some time and eValid appears to be a really neat and self-contained solution worth pursuing.
Sometimes we have trouble getting a recording "from life" (as you phrase it) to work reliably.
Any ideas?
Thanks for posting PhilipR.
From the very beginning we understood that extracting events by monitoring the DOM in the browser was probably never going to produce completely reliable recordings.
That is, what you record is what you play back, every time.
There are too many things going on in a modern browser to be able to 100% guarantee that the action you record is going to play back the same way.
Too many state changes involved -- that's the technical reason.
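As a generic illustration (not eValid-specific, and using hypothetical names), a naive replay that fires a recorded click immediately can race against asynchronous page-state changes; polling for the target element before acting rides out the state change:

```python
import time

class FakePage:
    """Toy stand-in for a browser page whose DOM changes asynchronously."""
    def __init__(self, appears_at):
        self.start = time.monotonic()
        self.appears_at = appears_at  # seconds until the element "exists"

    def find(self, element_id):
        # The element only exists after the simulated async work finishes.
        if time.monotonic() - self.start >= self.appears_at:
            return element_id
        return None

def replay_click(page, element_id, timeout=2.0, interval=0.05):
    """Poll until the element exists, then 'click' it; raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        target = page.find(element_id)
        if target is not None:
            return f"clicked {target}"
        time.sleep(interval)
    raise TimeoutError(f"{element_id} never appeared")

page = FakePage(appears_at=0.2)
# An immediate click would miss: right after "load" the element is absent.
assert page.find("submit-button") is None
# Polling tolerates the state change and succeeds.
print(replay_click(page, "submit-button"))
```

The same principle is why a recorded action sequence that worked once can fail on replay: the browser's internal state at playback time need not match its state at recording time.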
So we understand the difficulty, and we prepared the following guide to help you when making a recording:
http://e-valid.com/Products/Documentati ... guide.html

The key information is in the right-most column, which shows you what you should see in the script window during recording after you take each step.
But we have a standard caveat on this page: "your results may vary".
When we set up a recording, this kind of action/response correspondence is our guide to ensuring that the recorded script will be reliable.
On the first playback we also use Pause/Resume or "Single Step" mode and watch not only the behavior on the screen but also the content of the EventLog, which reflects how the browser responds to the input.
At this point you have three windows up on your display: the script (which you single-step through), the browser face(s), and the EventLog.
Our experience is that if you study how your web page works using these three sources of information, you can engineer a very reliable test script that is also effective in your QA effort.
-- eValid Support