Tbstatus monitoring
Tbstatus monitoring is a Ruby script that uses the tbstatus API to gather statistics on a system. It periodically collects statistics according to a configuration file (tbstatus.yml) and presents the results in .csv files, one per selected module. This covers any statistics the tbstatus API can gather, e.g. trunk status, signaling link status, ISUP interface status, NAP status, call status, etc.
Note: the tbstatus.rb Ruby script is different from the tbstatus command-line tool.
Download the script archive
Download the script here: media:tbstatus_monitor.zip.
This archive contains the gzipped tar archive tbstatus_monitor.tgz, which includes:
- tbstatus.rb : Status script.
- tbstatus.yml : Configuration file
- Instructions.txt : Quick instruction
Copy the script archive to TMG
- Transfer the .tgz file containing the tbstatus.rb script and the tbstatus.yml configuration file to the host of the TMG800, TMG3200 or TMG7800-CTRL using sFTP (FileZilla or WinSCP)
- Uncompress the file
tar xzf tbstatus_monitor.tgz
cd tbstatus_monitor
Script Usage
[Tbstatus_monitor]# ./tbstatus.rb
Usage: tbstatus.rb [path]
  path: '/*', /nap or any other supported path
Usage: tbstatus.rb [GROUP]
  GROUP: any defined GROUP in yml config file in CAPITAL letters (:all to print them all)
Usage: tbstatus.rb [command]
  command: :all  to print statistics for all defined GROUPS
           :dump to print all statistics for all supported paths
Usage: tbstatus.rb -d
  Goes into daemon mode, and starts logging system statistics according to YML config file
CFG: -f can be used as first argument to specify YML cfg file
Adjust configuration file
Select the required monitoring modules by editing the tbstatus.yml file, which contains the settings below.
- To specify the output .csv file name
slog_file: DATABASE.csv
- Or, output in a SQLite database file
slog_file: DATABASE.sqlite
- To specify the file rotation period
slog_rotation_period: daily
- To specify the statistics gathering interval
slog_update_interval: 15m
- Configure the required module statistics
Comment out the lines (statistics) that are not required with a "#" sign. This reduces the number of generated files and the number of columns within them.
The example below gathers the ISUP interface CIC group statistics and puts the results in ISUP_INTERFACE_CIC_GROUPS.csv files:
- ISUP_INTERFACE_CIC_GROUPS:
    slog_file: ISUP_INTERFACE_CIC_GROUPS.csv
    slog_rotation_period: daily
    slog_update_interval: 15m
    paths:
      - /isup/interface/cic_group:
          # - desired_group_state
          # - start_continuity_check
          # - interface_down
          - idle_cnt
          - incoming_cnt
          - outgoing_cnt
          - locally_blocked_cnt
          - remotely_blocked_cnt
          - locally_remotely_blocked_cnt
          - reset_cnt
          - suspended_cnt
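Configuration fragments in this layout can be read with Ruby's standard YAML library. The sketch below is for illustration only; it parses a shortened, hypothetical fragment mirroring the ISUP example above, not the real script's loading code:

```ruby
require 'yaml'

# Hypothetical fragment mirroring the ISUP example above (shortened).
config = YAML.load(<<~YML)
  ISUP_INTERFACE_CIC_GROUPS:
    slog_file: ISUP_INTERFACE_CIC_GROUPS.csv
    slog_rotation_period: daily
    slog_update_interval: 15m
    paths:
      - /isup/interface/cic_group:
          - idle_cnt
          - incoming_cnt
          - outgoing_cnt
YML

group = config['ISUP_INTERFACE_CIC_GROUPS']
puts group['slog_file']           # output file for this group
group['paths'].each do |entry|    # each entry maps one path to its field list
  entry.each { |path, fields| puts "#{path}: #{fields.join(', ')}" }
end
```

Fields commented out with "#" in tbstatus.yml simply disappear from the parsed field list, which is why they produce no columns in the output.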
To list the available paths and the statistics fields of each, use the script's query commands shown in Script Usage above (for example, :dump prints all statistics for all supported paths).
Instead of .csv files, statistics can be saved in an SQLite database file. The example below gathers the NAP statistics:
- NAPS:
    slog_file: NAPS.sqlite
    slog_rotation_period: daily
    slog_update_interval: 10s
    paths:
      - /nap:
          - available_cnt
          - unavailable_cnt
          - availability_percent
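The slog_update_interval values above use a short suffix notation ("10s", "15m"). How the real script parses them is not documented here; the helper below is a plausible reading of that format, based only on the values seen in tbstatus.yml:

```ruby
# Convert an interval string such as "10s", "15m" or "2h" to seconds.
# The suffix set is an assumption inferred from the example values above.
def interval_to_seconds(str)
  unit = { 's' => 1, 'm' => 60, 'h' => 3600 }.fetch(str[-1]) do
    raise ArgumentError, "unknown interval unit in #{str.inspect}"
  end
  Integer(str[0..-2]) * unit
end

puts interval_to_seconds('10s')  # 10
puts interval_to_seconds('15m')  # 900
```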
Execute the script in daemon mode
- Change the script file permission
chmod +x tbstatus.rb
- Execute tbstatus script in daemon mode
nohup ./tbstatus.rb -d &
- To stop it, kill the process
[root@TB007036 ~]# ps -ef | grep tbstatus
root     20800 15165  0 04:17 pts/0    00:00:01 /usr/bin/ruby ./tbstatus.rb -d
root     23191 15165  0 04:46 pts/0    00:00:00 grep tbstatus
[root@TB007036 ~]# kill -9 20800
Execute the script on the Web Portal
Alternatively, you can run the tbstatus monitor tool under Toolpack, so that the script starts and stops automatically with the Toolpack service:
- On the Web Portal, go to Host -> Applications -> Create New Application
Name: tbstatus_monitor
Application Type: user-specific
bin path: /root/tbstatus_monitor/tbstatus.rb
working path: /root/tbstatus_monitor
Command line arguments: -f tbstatus.yml -d
Collect the data in .csv files
- Multiple .csv files are created in the same directory. They are rotated and zipped according to the yml configuration, and can be retrieved from the unit with sFTP or SSH scp for analysis by an external system.
Example files:
ADAPTER_IP_INTERFACES.csv
ADAPTER_LINE_INTERFACE_LINE_SERVICES.csv
ADAPTER_LINE_INTERFACES.csv
ADAPTER_SENSORS.csv
ADAPTER_USAGE.csv
DATABASE.csv
ISUP_INTERFACE_CIC_GROUPS.csv
ISUP_INTERFACES.csv
MTP2_LINKS.csv
MTP3_LINKS.csv
MTP3_LINKSETS.csv
MTP3_ROUTES.csv
NAPS_24hour_data.csv
NAPS.csv
The result of ISUP_INTERFACE_CIC_GROUPS.csv with the above configuration would be:
date,time,path,item_name,idle_cnt,incoming_cnt,outgoing_cnt,locally_blocked_cnt,remotely_blocked_cnt,locally_remotely_blocked_cnt,reset_cnt,suspended_cnt
"09/17/2014","22:24:22",/isup/interface/cic_group,C011107_00,30,0,0,0,0,0,0,0
"09/17/2014","22:24:22",/isup/interface/cic_group,C011107_02,31,0,0,0,0,0,0,0
"09/17/2014","22:24:22",/isup/interface/cic_group,C011107_04,31,0,0,0,0,0,0,0
"09/17/2014","22:24:22",/isup/interface/cic_group,C011107_05,31,0,0,0,0,0,0,0
"09/17/2014","22:30:00",/isup/interface/cic_group,C011107_00,30,0,0,0,0,0,0,0
"09/17/2014","22:30:00",/isup/interface/cic_group,C011107_02,31,0,0,0,0,0,0,0
"09/17/2014","22:30:00",/isup/interface/cic_group,C011107_04,31,0,0,0,0,0,0,0
"09/17/2014","22:30:00",/isup/interface/cic_group,C011107_05,31,0,0,0,0,0,0,0
"09/17/2014","22:45:00",/isup/interface/cic_group,C011107_00,30,0,0,0,0,0,0,0
"09/17/2014","22:45:00",/isup/interface/cic_group,C011107_02,31,0,0,0,0,0,0,0
"09/17/2014","22:45:00",/isup/interface/cic_group,C011107_04,31,0,0,0,0,0,0,0
"09/17/2014","22:45:00",/isup/interface/cic_group,C011107_05,31,0,0,0,0,0,0,0
Columns: date, time, path and item_name are always present. All other columns depend on the fields selected in the configuration file.
Lines: one per path item per update interval; in this example there are 4 CIC groups. Note that the first entries are written at 22:24 because the script was started at that time.
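Files in this layout are straightforward to post-process with Ruby's standard CSV library. The sketch below, using a shortened inline sample in the same layout as the example output above, sums the idle circuits per sampling time:

```ruby
require 'csv'

# A few rows in the same layout as the example .csv output above (shortened).
data = <<~CSV
  date,time,path,item_name,idle_cnt,incoming_cnt
  "09/17/2014","22:30:00",/isup/interface/cic_group,C011107_00,30,0
  "09/17/2014","22:30:00",/isup/interface/cic_group,C011107_02,31,0
CSV

idle_per_sample = Hash.new(0)
CSV.parse(data, headers: true) do |row|
  key = "#{row['date']} #{row['time']}"      # one sample = one date/time pair
  idle_per_sample[key] += row['idle_cnt'].to_i
end
idle_per_sample.each { |ts, total| puts "#{ts}: #{total} idle circuits" }
```

Because only date, time, path and item_name are guaranteed, any other column referenced this way must match the fields enabled in tbstatus.yml.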
Access data in the SQLite database
- Check for the database file which matches the configuration in tbstatus.yml. Following the example above, it is:
NAPS.sqlite
- Check for the database table name
[root@165 tbstatus_monitor]# sqlite3 NAPS.sqlite '.tables'
/nap
- Access the statistics in the database file
[root@165 tbstatus_monitor]# sqlite3 NAPS.sqlite 'SELECT * FROM "/nap" ORDER BY oid DESC LIMIT 10'
2014-05-26 15:44:51|TBSERVER|1500|0|100
2014-05-26 15:44:51|SS7_NAP|30|0|100
2014-05-26 15:44:50|SIP_NAP_1PLUS1|1500|0|100
2014-05-26 15:44:50|ISDN_1_1|30|0|100
2014-05-26 15:44:50|CANDY|0|1500|0
2014-05-26 15:44:40|TBSERVER|1500|0|100
2014-05-26 15:44:40|SS7_NAP|30|0|100
2014-05-26 15:44:40|SIP_NAP_1PLUS1|1500|0|100
2014-05-26 15:44:40|ISDN_1_1|30|0|1