Ruby and me [part 1]

This might be a long story cut into pieces, since I have been trying to learn Ruby on my own, doing some research as well as some projects. I don't want to just forget how to code; on many occasions it has saved my life when getting things done.

On this occasion I have been doing some coding in PHP to make my work easier, which in this case was to perform the following:

Automated Monthly Reports

Why? Well, in my current position there are several requirements that we need to provide the client on a monthly basis, and in some cases on a weekly basis. In my case it's monthly, so we are not so bad off, but in any case, the process of creating the monthly report includes, but is not limited to:

  1. Log in to 5 web applications
  2. Grab / grep information from those 5 web sites
  3. Download the information and perform some math around it
  4. Do some graphing in Excel; in other cases, copy and paste it.
  5. Select items (application servers) from each web application, in other words select the item to compute the information from and get the appropriate values
  6. Download the graphs and values
  7. Log out

In many of these cases, getting the information from each web application can take some time, since the data center where the information lives can change from one year to the next, a server can outgrow its current farm, or the application servers for the client can change. So in any case, knowing where to go for the information can quickly become obsolete.

So, looking at this as an opportunity, I found some nifty tricks using curl, which I knew nothing about; but there is something nice in learning new things that might make your life easier.
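
The kind of trick I mean: post the login form once with curl and keep the session cookie in a file, so every request after that is already authenticated. A minimal sketch in PHP (the URL, form fields, and credentials here are placeholders, not the real applications):
//log in once and stash the session cookie so later requests are already authenticated
$ch = curl_init('https://example.com/login'); //placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'username' => 'user',     //placeholder credentials
    'password' => 'secret',
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, 'cookies/login-cookie.txt'); //cookie file reused by later requests
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);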

After reading and learning a bit more, I switched hats and became a developer once again. With the basic PHP knowledge that I had, I did some awful work using some bad coding practices, which were later refined.

The end product was not pretty, nor was it secure, but it got the job done.

Here you can see that I used session variables:

//systems
$_SESSION['url'] = $systems['upurl1'];
$_SESSION['appip'] = $systems['appip'];
$_SESSION['dbip'] = $systems['dbip'];
$_SESSION['vip'] = $systems['vip'];
$_SESSION['truesightserver'] = $systems['tsurl'];
$_SESSION['tswatchpointid'] = $systems['tswatchpointid'];
$_SESSION['upurl'] = $systems['upurl1'];
$entityarray = explode(",", $systems['upentityids1']);
//clients
$_SESSION['clientid'] = $clients['clientid'];
$_SESSION['productid'] = $clients['productid'];
$_SESSION['type'] = $clients['clientid'];
$_SESSION['name'] = $clients['clientid'];
//users
$_SESSION['username'] = $users['username'];
$_SESSION['password'] = $users['password1'];
$_SESSION['encodedpw'] = $users['password2'];
And since I tried to learn a bit of MVC, I at least created a class for each system that was being engaged:
$is = new Insight();
$isfiles = $is->run();
In the class itself, you can see how unsafe my approach was:
class Insight{
    function run(){
        //get variables
        $clientid = $_SESSION['clientid'];
        $productid = $_SESSION['productid'];
        $databaseip = $_SESSION['dbip'];
        $applicationip = $_SESSION['appip'];
        $month = $_SESSION['month'];

Then the obvious thing to do is build the variables and paths that you need:
$urltotallogins = 'https://***/cgi-bin/AM/graphs/graphusage.g1.php?ip='.$databaseip.'&type=TOTALLOGINS&clientid='.$clientid;

$urlpercentilegraph = 'https://***/cgi-bin/AM/bw/95usagegviptest.php?cid='.$clientid.'&bwyearmonth=20110&vipyearmonth=201101';
$cookieFileName = "cookies/insightcookie-".$clientid."-".$month.".txt";
$file95usagevip = "report/".$clientid."img95usagegvip".$month.".png";
$filetotalloginsvip = "report/".$clientid."imgtotallogins_".$month.".png";

//set variables for file

$baseurl = 'https://***/cgi-bin/AM/downloadstats';
//these files contain all the statistics for a given month
$filecomplete = $baseurl.'.php?clientid='.$clientid.'&ip='.$databaseip;
$filefileSystemUsage = $baseurl.'.php?clientid='.$clientid.'&ip='.$applicationip.'&mount=/usr/local/blackboard&type=FSUSAGE';
$filedatabaseUsage = $baseurl.'.php?clientid='.$clientid.'&ip='.$databaseip.'&mount=/usr/local/blackboard/cms&type=DBFSUSAGE';
$filetotalLogins = $baseurl.'1.php?clientid='.$clientid.'&ip='.$databaseip.'&type=TOTALLOGINS';

But then the fun part was using the DRY (Don't Repeat Yourself) way of thinking: I also created a class for the CURL work, which I called SC, and executed something like this:
//create an instance of SC (session curl)
$sc = new SC();
//execution 1.1 - get images
$sc->download($urltotallogins, $cookieFileName, $filetotalloginsvip);
$sc->download($urlpercentilegraph, $cookieFileName, $file95usagevip);
Fairly simple, right? Well, the complicated part was identifying the common bits and the different patterns the URLs were using, so getting the images and information was kind of funky.

For documentation purposes, the core of the CURL class is below.
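It boils down to a single download method; a simplified sketch (the exact curl options and any error handling in the original may have differed):
class SC{
    //fetch $url reusing the cookie file from the login step and write the
    //response (image or stats file) straight to $outfile
    function download($url, $cookiefile, $outfile){
        $fp = fopen($outfile, 'w');
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookiefile);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookiefile);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        curl_exec($ch);
        curl_close($ch);
        fclose($fp);
    }
}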

Fairly straightforward: set the cookies, follow redirects, build the correct paths, and download the image or file as needed.

As mentioned before, this was just a proof of concept to show it worked; now that I have been working with Ruby, it's quite different.

The different story.

Learning Ruby to do all of this has definitely become a challenge, and I have yet to meet an engineer who does not like one. I am reading a bit and coding some more (things that I will be documenting later), but for now, some references.

A few things along the way that I have found interesting and need to use are:

  • Curb (libcurl bindings for Ruby)
  • Nokogiri - at first I read it wrong and thought it said "NokoGirl" -- small dumb joke
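
The rough shape I have in mind for putting the two together (the URL, login fields, and CSS selector below are made up, and nothing is hardened yet):
require 'curb'
require 'nokogiri'

#log in once and keep the session cookie around, same idea as the PHP version
http = Curl::Easy.new
http.enable_cookies = true
http.cookiejar = 'cookies/example-cookie.txt'
http.follow_location = true

http.url = 'https://example.com/login' #placeholder URL
http.http_post(Curl::PostField.content('username', 'user'),
               Curl::PostField.content('password', 'secret'))

#fetch the stats page with the same handle so the cookie gets reused
http.url = 'https://example.com/stats?clientid=123'
http.perform

#let Nokogiri pull the values out instead of grepping raw HTML
page = Nokogiri::HTML(http.body_str)
page.css('table#usage tr').each do |row|
  puts row.css('td').map(&:text).join(' | ')
end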

Those two tools will definitely make my life easier, since now I can scrape pages and read the information in a more comprehensible manner. Part two will show you where I stand... shortly.

Written on February 25, 2014