
A new project

So we are going to give this blog a new focus, turning it into a more technical one. Although it already has some interesting technical stuff in the backend (such as integrations with different APIs and RSS feeds that updated the information automatically) and a self-publishing bot to share the content on the social networks (Facebook and also Twitter), this time we are going to pay special attention to technical issues, instead of the automatic, self-published news about the Gomeznarro neighbourhood 🙂

Anyway, I think I’m going to keep most of the articles and also the domain name. The reason? Tradition and love for the first project 🙂

In these first articles we are going to talk about a new technical and code development project I’m working on. The idea is to build, in a first phase, a crawler that gets the configurations from the STPs and DRAs in a network. Perhaps in a second phase we can also study how to configure them, but for the moment it will be read-only.

And why are we interested in extracting the information from STPs/DRAs in an automatic way? Well… we are interested in integrating it with three technologies:

1) A Big Data mechanism (with the associated Machine Learning)
2) A bot (chatbot)
3) And an inventory aligner

But let’s start at the beginning:

What is an STP (and a DRA)? An STP is a router that communicates SS7/Sigtran information. We are talking about the routers of an international network, so they carry traffic between roaming partners; this is what is called an international roaming carrier. A mobile operator communicates at an international level with other mobile operators using these STPs (or DRAs if we are using 4G/LTE technology).

So, for instance, if we are in a foreign country with our handset and we try to make a call or use an application (like WhatsApp), we are going to be registered in an operator of that foreign country. To communicate between the operator of the user (called the home operator) and the one of the foreign country, some path is needed. This path is created through these “routers” (STPs): https://en.wikipedia.org/wiki/Signal_Transfer_Point

Literally, as you can see there, it means Signal Transfer Point.

And why are we interested in extracting this information? If we connect it to a Big Data system we can predict incidents, problems or capacity shortages. We can also detect fraud in a better way, or analyze traffic data.

If we integrate it with a bot, we can talk with our STP as we would with a person: receive alert messages, or spare the operations and management staff the most difficult commands.

Finally, if we synchronize the configuration of the STPs with the inventory tool (Netcracker), we can align the information between both worlds and avoid mismatches.

So, OK, we get it. These STPs are manufactured by Oracle, and Oracle sells its own application to do all this (and also the Big Data system, the bot, the inventory system, etc.). The problem? Some millions of dollars. And here is where our idea starts: let’s see if we can avoid them.

And here we go. Our first model is some code to extract and parse the configurations. This is what we are going to talk about today.

Once we get this model we are going to translate it to HP Orchestrator in order to turn it into some kind of bus and automate the function. We could also automate it with a crontab in the meantime, but Orchestrator is our new standard.
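
As a sketch of that interim crontab option (the script path and the log path are hypothetical placeholders, not part of the project yet):

# Hypothetical crontab entry: run the whole extraction every day at 06:00.
# The script path and the log path are placeholders.
0 6 * * * /opt/stp-crawler/run_extraction.sh >> /var/log/stp-crawler.log 2>&1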

For the moment my code is Linux based. Once this is done, Orchestrator will be integrated with the bot (through its API), and the configuration information, in .csv format, will be uploaded to the Big Data system using either an API or SFTP (depending on the phase of the project).
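
For the SFTP option, a minimal sketch could look like this (host, user and remote path are assumptions, not the real ones):

#!/bin/bash
# Hypothetical upload of the generated CSV to the Big Data landing zone.
# Host, user and remote path are placeholders.
sftp bigdata-user@bigdata-host <<'EOF'
put tablalinkset.csv /landing/stp/tablalinkset.csv
bye
EOF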

So, let’s talk about the code.

We have used a mix of languages, chained together as sketched after the list:

– Expect, to extract the configuration
– Perl with regexes, to parse it vertically (by lines)
– awk, to parse it horizontally (by columns)
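
Just to fix ideas, this is roughly how the three stages chain together (all the script names here are hypothetical):

#!/bin/bash
# Hypothetical end-to-end run of the model; each script is shown below.
./get_stats.exp        # Expect: rept-stat-iptps output -> outputfile1.csv
./get_linksets.exp     # Expect: linkset configuration -> outputfile2.csv
./match_linksets.pl    # Perl: match both files -> tablafinal.csv
./format_table.sh      # awk: format the columns -> tablalinkset.csv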

In this first phase we are going to extract the maximum values configured in the linksets (interconnections) of my signalling cards.

We proceed in this way:

First, the Expect code (remember, this is only a model; the spawn/login against the STP is just a placeholder):

#!/usr/bin/expect
# Model only: the spawn/login details are placeholders for the real STP session.
spawn ssh user@stp-host

set output [open "outputfile1.csv" "w"]
sleep 2
expect "Command Executed"

# ask the STP for the traffic statistics
send "rept-stat-iptps\r"
expect "Command Executed"

# dump the captured buffer to the output file
set outcome $expect_out(buffer)
puts $output $outcome
sleep 10
close $output
# now interact with the session
interact

So we are using the rept-stat-iptps command to get all my traffic stats, and we store them in a first file (outputfile1.csv).

With a second Expect script we extract the configuration of all the linksets of a SIGTRAN card (here I use rtrv-ls as the retrieval command, which is an assumption; adapt it to whatever your platform uses):

#!/usr/bin/expect
# Same model skeleton, now retrieving the linkset configuration.
spawn ssh user@stp-host

set output [open "outputfile2.csv" "w"]
sleep 2
expect "Command Executed"

# retrieve the linkset configuration (rtrv-ls is an assumption here)
send "rtrv-ls\r"
expect "Command Executed"

set outcome $expect_out(buffer)
puts $output $outcome
sleep 10
close $output
# now interact with the session
interact

Once we get both files we start a matching process between them. We use Perl to do this (I love Perl):

#!/usr/bin/perl
# First script: extract the linkset names from the card configuration file.
# The file names are placeholders matching the two Expect outputs above.
open FICHERO, "outputfile2.csv" or die "No existe outputfile2.csv";
open FICHERO2, ">linksets.txt" or die "No puedo abrir linksets.txt";

my $flag = 0;
while (my $linea = <FICHERO>)
{
    if($linea =~ /IPLNK STATUS/){
        $flag = 0;
    }

    chomp($linea);
    if($flag == 1){
        # the linkset name sits in a fixed-width column (offset 36, 10 chars)
        my $palabras = substr($linea,36,10);
        print FICHERO2 $palabras."\n";
        print $palabras."\n";
    }

    # the block we want starts right after this marker line
    if($linea =~ /nictef nictef /){
        $flag = 1;
    }
}
close FICHERO;
close FICHERO2;

And the second one, which does the matching between both files:

#!/usr/bin/perl
# Second script: for every linkset name, pick its stats line from the
# traffic file, plus the RCV: line that follows it.
my $fichero = "outputfile1.csv";   # stats captured with rept-stat-iptps

open FICHERO2, "linksets.txt" or die "No existe linksets.txt";
open FICHERO3, ">tablafinal.csv" or die "No puedo abrir tablafinal.csv";

while (my $linkset = <FICHERO2>)
{
    chomp($linkset);
    print $linkset."\n";
    open FICHERO, $fichero or die "No existe ".$fichero;
    while (my $linea = <FICHERO>)
    {
        if($linea =~ /\Q$linkset\E/){
            chomp($linea);
            print $linea;
            print FICHERO3 $linea;
            # the RCV: counters come on the next line
            $linea = <FICHERO>;
            print FICHERO3 $linea."\n";
        }
    }
    close FICHERO;
}
close FICHERO2;
close FICHERO3;

As you can see, two standard scripts armed with regexes in order to select the text we are interested in.

With this we are going to get a file with a format like the following:

synviphus 40% 2500* 5000 TX: 129 290 18-01-20 16:29:21
RCV: 628 1584 18-04-04 05:06:21

synviphus 40% 2500* 5000 TX: 129 290 18-01-20 16:29:21
RCV: 628 1584 18-04-04 05:06:21

synviatus 40% 2500* 5000 TX: 82 170 18-04-28 05:13:21
RCV: 542 908 18-05-15 17:05:40

synviatus 40% 2500* 5000 TX: 82 170 18-04-28 05:13:21
RCV: 542 908 18-05-15 17:05:40

And now we format it using awk:

#!/bin/bash

echo "linkset;CONFIG RSVD; CONFIG MAX; TPS; PEAK; TIMESTAMP; TIMESTAMP" >> tablalinkset.csv
awk '{print $1,";",$3,";",$4,";",$6,";",$7,";",$8,";",$9}' tablafinal.csv >> tablalinkset.csv

With a sort command we also check for and erase possible duplicates, because sometimes a linkset receives the same description twice.
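
A minimal sketch of that cleanup pass (the file names follow the previous step; the exact commands are an assumption):

#!/bin/bash
# Hypothetical dedup pass: keep the header, drop the empty separator rows
# and collapse the duplicated linkset lines.
head -1 tablalinkset.csv > tablalinkset_dedup.csv
tail -n +2 tablalinkset.csv | grep -v '^ *;' | sort -u >> tablalinkset_dedup.csv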

And with this we have generated a database with our configuration, in CSV format:

linkset;CONFIG RSVD; CONFIG MAX; TPS; PEAK; TIMESTAMP; TIMESTAMP
synviphus ; 2500* ; 5000 ; 129 ; 290 ; 18-01-20 ; 16:29:21
; ; ; ; ; ;
synviphus ; 2500* ; 5000 ; 129 ; 290 ; 18-01-20 ; 16:29:21
; ; ; ; ; ;
synviatus ; 2500* ; 5000 ; 82 ; 170 ; 18-04-28 ; 05:13:21
; ; ; ; ; ;
synviatus ; 2500* ; 5000 ; 82 ; 170 ; 18-04-28 ; 05:13:21

So, with this first configuration, we are ready to feed the Big Data system and the bot. And by following this same standardized process we can extract more information and use it to feed the systems.

We will keep you informed about the next steps.

Juan de la Cruz
@jdelacruz_IoT