Log Analyzer

ElasTest's Log Analyzer service provides an advanced tool for analyzing any log gathered during any finished or running TJob execution.

You can load the logs directly from the page of any TJob execution.

Or load as many TJob execution logs as you want by clicking on the "Log Analyzer" menu button. A dialog will pop up to let you select the specific TJob execution(s) you want to analyze. To do so:

  1. Select the Project to which the TJob belongs
  2. Select the TJob
  3. Select all the TJob Executions you want to load

After clicking the "OK" button, your logs will load into the Log Analyzer. Every entry is divided into several fields, which include:

  • timestamp: timestamp of the entry
  • component: component that generated the entry (SuT, TJob, TSS)
  • stream: specific stream that generated the entry (remember that one single component can generate different logs and metrics)
  • exec: TJob Execution to which the entry belongs
  • message: the log entry message
  • level: the logging level of the entry (DEBUG, INFO, WARNING, ERROR...)
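Conceptually, each loaded entry can be modelled as a record with the fields listed above. The following is an illustrative sketch only, not ElasTest's actual data model; the class and example values are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    # Field names mirror the columns described above; this is an
    # illustrative model, not ElasTest's internal representation.
    timestamp: datetime   # when the entry was produced
    component: str        # component that generated it (SuT, TJob, TSS)
    stream: str           # specific stream within that component
    exec: str             # TJob Execution the entry belongs to
    message: str          # the log entry message
    level: str            # e.g. "DEBUG", "INFO", "WARNING", "ERROR"

# Hypothetical example entry:
entry = LogEntry(datetime(2018, 5, 4, 12, 0, 0), "tjob", "default_log",
                 "3", "Test started", "INFO")
```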


You can reorder the columns as you want by dragging and dropping them.

On the right-hand menu you will find the two main tools currently provided by the ElasTest Log Analyzer: the Filter tool and the Mark tool.

The Filter tool lets you perform a filtered load of entries. You can filter by:

  • Date: load logs within a certain time frame, applied to the timestamp of the entries
  • Component/Stream: load logs produced by certain component(s) for certain stream(s)
  • Level: load logs of a certain logging level
  • Message: load logs whose message field contains a certain word
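The combined effect of these filters can be sketched as a predicate applied to each entry, with an optional cap on how many matching entries are kept. This is a hypothetical helper to illustrate the semantics, not ElasTest code; the `max_entries` parameter models the load-limit option:

```python
def filter_entries(entries, start=None, end=None, components=None,
                   streams=None, levels=None, message_contains=None,
                   max_entries=None):
    """Illustrative filter over log entries (dicts with the fields above)."""
    result = []
    for e in entries:
        # Date filter: keep entries whose timestamp falls in [start, end]
        if start is not None and e["timestamp"] < start:
            continue
        if end is not None and e["timestamp"] > end:
            continue
        # Component/Stream filters
        if components and e["component"] not in components:
            continue
        if streams and e["stream"] not in streams:
            continue
        # Level filter
        if levels and e["level"] not in levels:
            continue
        # Message filter: substring match on the message field
        if message_contains and message_contains not in e["message"]:
            continue
        result.append(e)
        # Keep only the first N entries that match the filters
        if max_entries is not None and len(result) >= max_entries:
            break
    return result
```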


The options at the bottom let you set the number of entries you want ElasTest to load (the first ones that match your filters).

Add from last and Add from selected append only new entries, starting from the last loaded entry or from the selected entry, respectively.

The Mark tool lets you perform a coloured search across your loaded entries. You can then easily navigate between the entries matched by each search by clicking the arrow buttons for that search.
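A coloured search essentially tags the positions of matching entries so the arrow buttons can jump between them. A minimal sketch of that idea, using hypothetical helpers rather than ElasTest's implementation:

```python
def mark(entries, term):
    """Return the indices of entries whose message contains the search term."""
    return [i for i, e in enumerate(entries) if term in e["message"]]

def next_match(matches, current):
    """Index of the first marked entry after `current`, or None at the end.

    This models what the forward arrow button does for one search.
    """
    for i in matches:
        if i > current:
            return i
    return None

# Hypothetical entries, reduced to the message field:
msgs = [{"message": m} for m in ["start", "error: disk", "ok", "error: net"]]
hits = mark(msgs, "error")
```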

There is a known issue when log entries share the same timestamp at millisecond resolution: they may appear in a random order.
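The reason is that sorting by timestamp alone cannot distinguish entries within the same millisecond, so their relative order is unspecified. One workaround idea, sketched here under the assumption of a hypothetical `arrival` sequence number (this is not a fix shipped in ElasTest):

```python
def stable_order(entries):
    """Sort entries by timestamp, breaking millisecond ties deterministically.

    `arrival` is a hypothetical sequence number assigned when each entry
    is received; using it as a secondary sort key means entries with
    equal timestamps no longer reorder randomly.
    """
    return sorted(entries, key=lambda e: (e["timestamp"], e["arrival"]))
```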