Tuesday, March 17, 2015

How to restore and preserve multiple Screen sessions?



A new problem and another quick solution!

As usual, while working on many servers (imagine 20 Linux/Unix machines), it is difficult to remember all the machine names and their configurations, and yet keep every individual setup at your fingertips.
The obvious choice on Linux is the GNU Screen terminal multiplexer.

But there are some limitations :(
What if the machine running your Screen sessions reboots or shuts down? Starting each individual session again after that machine comes back up is time consuming and, to put it mildly, uncomfortable during stressful times (using alternate wordings for being lazy :) ).

I wrote the following (very simple) wrapper scripts on top of Screen commands to bring up all sessions as quickly as possible with a single Screen command. Note that all machine names and other terms in the session commands below are generalized.

[root@techsutram.com screen-sessions]# ls
Setup_1.ssh  Setup_2.ssh  Setup_3.ssh  Setup_4.ssh
[root@techsutram.com screen-sessions]# cat Setup_1.ssh
screen -t DRIVER_EXECUTION ssh root@launcher.techsutram.com
screen -t node1 ssh root@node1.techsutram.com
screen -t node2 ssh root@node2.techsutram.com
screen -t node3 ssh root@node3.techsutram.com
screen -t node4 ssh root@node4.techsutram.com

[root@techsutram.com screen-sessions]# screen -S _SETUP_1_ -c ~/screen-sessions/Setup_1.ssh


Each machine will prompt for its password (if required); use Ctrl-a " to list the windows and select the node you want.
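To go one step further, a small wrapper along the same lines (my sketch, not part of the original scripts, assuming the ~/screen-sessions/Setup_*.ssh naming used above) could relaunch every layout detached after a reboot, for example from an @reboot cron entry:

#!/bin/bash
# restore-screens.sh - relaunch every saved Screen layout, detached.
SESSION_DIR="$HOME/screen-sessions"

for conf in "$SESSION_DIR"/Setup_*.ssh; do
    # Derive the session name from the file name, e.g. Setup_1.ssh -> _SETUP_1_
    name="_$(basename "$conf" .ssh | tr '[:lower:]' '[:upper:]')_"

    # Skip layouts whose session is already running
    if screen -ls | grep -q "[.]${name}[[:space:]]"; then
        continue
    fi

    # -d -m starts the session detached; reattach later with: screen -r _SETUP_1_
    screen -d -m -S "$name" -c "$conf"
done

With SSH keys in place the windows connect silently; otherwise reattach with screen -r and type the passwords as before.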
Hopefully it helps! At least it will help me sometime in the future.

Tuesday, November 4, 2014

Balanced Scorecard approach to Software Quality


A few months back, I was introduced to the Balanced Scorecard concept by Rajul @Sunstone.
It essentially maps business strategy onto the customer, finance, internal processes, and learning & growth perspectives. More information is available at Balanced scorecard.

The Balanced Scorecard framework triggered a thought process about applying the same idea to software quality. The tables below apply the Balanced Scorecard to software quality. How each Key Performance Indicator (KPI) is measured or tracked could be a matter of debate, as they could be tracked weekly, monthly, quarterly, or yearly.

The formulae listed in the tables below can easily be tweaked to individual needs. No guarantee of any sort :). A few of these KPIs are available on the internet on different software testing and quality assurance forums; this article tries to put them into the Balanced Scorecard framework.

Assumptions:
  • We know how to calculate the total cost of testing efforts.
  • There could be other KPIs that individuals use, but I cannot list everything here; the KPIs below are examples only.
 
Customer

1. Objective: Improve on features shipped
   Measure: Number of feature requests
   Target: Identify the top 10 features requested by customers
   Initiatives:
   • Analyze escalations, mailing lists, and sales inputs
   • Determine how many escalations/mailing-list threads/sales inputs qualify as features

2. Objective: Reduce critical bugs in production
   Measure: Number of critical bugs reported by customers
   Target: Reduce critical bugs to 10% of the previous release; final goal: zero critical bugs in production
   Initiatives:
   • Analyze escalations, customer-reported incidents, mailing lists, etc.

3. Objective: Improve product delivery cycle time
   Measure: Automation productivity to expedite delivery
   Target: Improve automation productivity consistently release over release, e.g. by 100%, 80%, or 50%
   Initiatives:
   • Track automation productivity = (total number of automated tests) / (total automation effort)

4. Objective: Improve product delivery cycle time
   Measure: Test cycle time
   Target: Reduction in total testing time
   Initiatives:
   • Track the reduction in total test cycle time = (total testing downtime) / (total test execution time)

5. Objective: Innovation
   Measure: Number of new ideas generated
   Target: Improve on the implementation of ideas into the product
   Initiatives:
   • Track idea implementation = (number of new ideas/suggestions) / (ideas/suggestions implemented)
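To make the tracking concrete, here is a minimal command-line sketch (not part of the original scorecard) of the automation-productivity KPI from row 3; the test count and effort figure are made-up examples.

# Hypothetical inputs: 420 automated tests produced with 35 person-days of automation effort
automated_tests=420
automation_effort_days=35

# Automation productivity = automated tests / automation effort
awk -v t="$automated_tests" -v e="$automation_effort_days" \
    'BEGIN { printf "automation productivity: %.2f tests per person-day\n", t / e }'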





Finance

1. Objective: Reduce software testing cost
   Measure: Cost per test case
   Target: Reduce the cost per test case
   Initiatives:
   • Track cost per test case = (total testing cost) / (number of test cases)

2. Objective: Reduce software testing cost
   Measure: Cost per automated test case
   Target: Reduce the cost per automated test case
   Initiatives:
   • Track cost per automated test case = (total automation cost) / (number of automated test cases)

3. Objective: Reduce the cost of minor releases
   Measure: Release cost
   Target: Reduce the cost of a minor release compared to previous minor releases
   Initiatives:
   • Track the minor-release cost share = minor (no. of release defects filed + no. of release resources) / [major (no. of release defects filed + no. of release resources) + minor (no. of release defects filed + no. of release resources)]
   • Consistently track the output over all releases

4. Objective: Reduce the cost of major releases
   Measure: Release cost
   Target: Reduce the cost of a major release compared to previous major releases
   Initiatives:
   • Track the major-release cost share = major (no. of release defects filed + no. of release resources) / [major (no. of release defects filed + no. of release resources) + minor (no. of release defects filed + no. of release resources)]
   • Consistently track the output over all releases
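The finance ratios above are simple divisions; a rough sketch with invented figures (all numbers are placeholders, not real costs) could look like this:

# Hypothetical figures for one release
total_testing_cost=50000          # overall testing cost, e.g. in USD
total_test_cases=2500
total_automation_cost=12000
automated_test_cases=800

awk -v c="$total_testing_cost"     -v n="$total_test_cases" \
    -v ac="$total_automation_cost" -v an="$automated_test_cases" 'BEGIN {
        printf "cost per test case:           %.2f\n", c / n
        printf "cost per automated test case: %.2f\n", ac / an
}'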





Internal processes

1. Objective: Improve test effectiveness
   Measure: Automation percentage
   Target: Improve the overall automation percentage release over release to bring more test effectiveness
   Initiatives:
   • Track automation percentage = (automated tests) / (manual tests + automated tests)

2. Objective: Improve test effectiveness
   Measure: Manual-testing percentage
   Target: Reduce manual effort close to zero
   Initiatives:
   • Track manual percentage = (manual tests) / (manual tests + automated tests)

3. Objective: Improve test effectiveness
   Measure: Defect slippage to internal customers and to production (consider deferred incidents as well)
   Target: Zero defect slippage to internal customers and to production
   Initiatives:
   • Track defect slippage = (no. of defects reported by internal customers) / (no. of defects reported by internal customers + no. of deferred defects)

4. Objective: Improve software testing quality
   Measure: Defect severity distribution
   Target: Zero high-severity incidents
   Initiatives:
   • Track severity distribution = (no. of Sev 1 and Sev 2 defects) / (total number of defects)

5. Objective: Improve test effectiveness (operations)
   Measure: Number of parallel qualifications compared to major/minor releases
   Target: e.g. 1. less than the number of major and minor releases combined; 2. less than m
   Initiatives:
   • Track qualifications in progress = (number of non-release qualifications) / (number of major releases + number of minor releases)

6. Objective: Improve documentation
   Measure: Level of completeness, accuracy, and simplicity
   Target: Improve the quality of documentation
   Initiatives:
   • Number of new documents per release
   • Number of new updates per document per release
   • Percentage of new updates with respect to new features

7. Objective: Improve software testing quality
   Measure: Tests passing per (daily) build
   Target: Set targets for the percentage of tests passing per daily build, e.g. 1. less than 10%; 2. less than 5%; 3. less than 2%
   Initiatives:
   • Track the daily pass rate = (number of tests passed) / (total number of tests planned for execution)

8. Objective: Improve software testing quality
   Measure: Features per iteration (or release); measure story points per iteration in the Agile model
   Target: Measure how many features we ship as part of each iteration (or release)
   Initiatives:
   • Track features per iteration = (number of features planned in the iteration or release) / (total number of targeted features)

9. Objective: Improve on business-value deliverables
   Measure: How early can we deliver value in the release?
   Target: In the Agile model, deliver high value in early sprints
   Initiatives:
   • Track business value per feature: customer value of feature x = Cx points, story points of feature x = Sx points, % business value of feature x = Cx / Sx
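A similar throwaway sketch (again with invented counts, not real project data) shows how the automation percentage, manual percentage, and severity-distribution KPIs above could be computed:

# Hypothetical counts for one release
manual_tests=300
automated_tests=700
sev12_defects=4          # Sev 1 + Sev 2 defects
total_defects=90

awk -v m="$manual_tests" -v a="$automated_tests" \
    -v s="$sev12_defects" -v d="$total_defects" 'BEGIN {
        printf "automation percentage:      %.1f%%\n", 100 * a / (m + a)
        printf "manual percentage:          %.1f%%\n", 100 * m / (m + a)
        printf "high-severity distribution: %.1f%%\n", 100 * s / d
}'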





Learning and Growth

1. Objective: Resource availability
   Measure: If a release is accepted for qualification, how many qualified resources are available?
   Target: A sufficient number of resources should be available to work on every qualification
   Initiatives:
   • Track trained resources for release qualification = (resources available to work on the qualification) / (total number of available resources)

2. Objective: Knowledge transfer / Transfer of Information (ToI)
   Measure: How many knowledge transfers (ToIs) are happening within the testing organization?
   Target: At least one KT (ToI) per week within the team
   Initiatives:
   • Track knowledge-transfer progress = (ToIs delivered) / (total ToIs delivered + total ToIs in the pipeline)

3. Objective: Cross-team collaboration
   Measure: How much effort your team (A) spent collaborating with other teams (B)
   Target: Team collaboration should be 100%
   Initiatives:
   • Track team collaboration = (total effort spent by your team A) / (total effort spent by your team A + total effort spent by other teams B)
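The same pattern works for the learning and growth ratios; the counts below are purely illustrative:

# Hypothetical counts for one quarter
tois_delivered=6
tois_in_pipeline=4
team_a_effort_days=30       # collaboration effort spent by your team (A)
other_teams_effort_days=20  # collaboration effort spent by other teams (B)

awk -v d="$tois_delivered" -v p="$tois_in_pipeline" \
    -v a="$team_a_effort_days" -v b="$other_teams_effort_days" 'BEGIN {
        printf "knowledge-transfer progress:  %.0f%%\n", 100 * d / (d + p)
        printf "collaboration share (team A): %.0f%%\n", 100 * a / (a + b)
}'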







As I said, the tables above list KPIs that can be tweaked to individual needs, but I cannot guarantee that implementing them will lead to success.

If you want to suggest a correction (I think there is scope for improvement) or additional KPIs for any of the perspectives above, leave a comment below.

Wednesday, April 23, 2014

Speaker at India Cloud Week 2014

(Photo: me, standing second from the left)

Last month, on 22nd March 2014, I was invited as a speaker to India Cloud Week 2014 in Pune. This is the second time I have been invited by UNICOM Learning, and again I appreciate the invitation.

It was a one-day conference with around 50 attendees (rough estimate) from different companies. The topic I covered was Journey of Data Center to Private Cloud. The presentation laid out my view of the data center, its different components, and the steps one has to follow to move to a Private Cloud environment. The brief agenda of the presentation was:
  • Intro to Data Center
  • Why Private Cloud?
  • Moving to Private Cloud
    • Challenges
    • Steps
  • Resilient business services
Here is the link to a few of the snaps from the conference.

Friday, March 28, 2014

Cisco UCS - A Nice Explanation


I found a very nice video explanation of Cisco UCS on YouTube.
Below are the links to those videos:

 Cisco UCS Whiteboard part 1
 Cisco UCS Whiteboard part 2

Hope someone finds it useful.

Friday, February 28, 2014

Test Estimation in Scrum Environment

One of the most important activities in the Agile methodology is the estimation of user stories. This estimation has two parts: development estimation and test estimation of the user stories.

I would like to drill deeper into the test estimation of user stories (a similar approach can be applied to development estimation as well) and into one of the methods that can be used to arrive at a justifiable figure. The estimate is always provided in story points, where each story point corresponds to 1 day of work, i.e. 6 hours (= 8 hours minus 2 unproductive hours :) ). Hence 1 story point measures 6 hours of work.

Consider a user story that requires 60 hours of test effort. Once the feature is developed, it is quite possible that defects are observed after the initial round of testing. Assuming the defects that are found and fixed consume an additional 15% of the test effort that was initially planned, we need to add those hours to the initial estimate, i.e. 60 + 15% of 60 = 69 hours. After retesting those fixes, it is also safe to assume some further defects will be observed, which may add another 10% of (15% of 60) hours = 0.9 hours.

Hence the total test estimate for the user story jumps from the initial 60 hours to 69.9 hours (= 69 + 0.9). The story points for this user story would be 69.9 hours / 6 hours (refer to the second paragraph above) = 11.65, rounded to 12.

Hence the test estimate for our user story is 12 points, or 69.9 hours. This can further be converted into weeks or days based on the unit that one would like to follow.
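For anyone who wants to script the arithmetic above, here is a tiny sketch (mine, not from the original post) with the same assumptions: 6 productive hours per story point, 15% rework after the first test round, and 10% re-rework on top of that.

#!/bin/bash
# Worked example of the test-estimation arithmetic above.
base_hours=60                                               # initial test estimate
rework=$(echo "$base_hours * 0.15" | bc -l)                 # defects found in the first round (9 h)
re_rework=$(echo "$rework * 0.10" | bc -l)                  # defects found while retesting the fixes (0.9 h)
total=$(echo "$base_hours + $rework + $re_rework" | bc -l)  # 69.9 h

printf "total test effort: %.1f hours\n" "$total"
printf "story points (6 h each): %.0f\n" "$(echo "$total / 6" | bc -l)"

Swapping in your own base estimate and rework percentages gives the equivalent figure for any other user story.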

We have to keep refining these estimates every sprint so that they stay aligned with the actuals for the upcoming sprints' user stories.

Is there any other method that can be used to arrive at test estimates? Comments are welcome.

 



