International Journal of Scientific & Technology Research

IJSTR >> Volume 2, Issue 4, April 2013 Edition




Website: http://www.ijstr.org

ISSN 2277-8616



A Novel Framework For End-To-End Automation Testing


 

AUTHOR(S)

Praveen M Bidarakundi, Raghavendra Prasad S.G

 

KEYWORDS

Index Terms: Class Testing, Differential Unit Testing, End-to-End Automation of Unit Tests, Framework for Unit Testing, Object-Oriented Program Testing, Unit Testing, Integration Testing.

 

ABSTRACT

Abstract: Program implementations change often, typically to reduce running time and/or memory consumption. It is then frequently necessary to test two versions of the software: the current version and a newer one. The newer version may add extra methods/functions, but the remaining methods should behave the same as in the current version, and we must verify that they have not been affected by the changes made in the new version (regression testing). Methods are also often refactored to new prototypes/signatures to offer better abstraction. These newly prototyped methods in turn invoke the previous methods, i.e. they are simply wrappers around them; for instance, APIs are often wrapped by corresponding methods. In this situation it becomes important to test the newly prototyped methods, since we need to verify the correct bindings/mappings between the old methods/APIs and the new ones. Usually developers write unit tests to exercise their own logic. Testers cannot write such tests, because they lack knowledge of the implemented logic and may not know how to code; however, a tester does know what each method does and what its expected behavior and return type are. We therefore need a new way to test each method. We propose a novel framework that addresses these issues. The framework takes three input parameters: the class to be tested, variable initialization values (test data), and expected results. From this information the framework automatically builds a test driver class at runtime, on the fly. This test driver class is compiled and executed to obtain the actual results for the class under test.
These actual results are then compared with the expected results. If a method's actual result does not match its expected result, the method's behavior has changed and the test fails; otherwise the test passes.
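The core loop the abstract describes (take a class under test, test data, and expected results; invoke each method; compare actual against expected) can be sketched with Java reflection. This is a minimal illustration, not the authors' implementation: the `Calculator` class, the method names, and the `int`-only parameter handling are all hypothetical assumptions, and the runtime compilation step is omitted, since reflection alone suffices to show the comparison logic.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test (stands in for the user-supplied class).
class Calculator {
    public int add(int a, int b) { return a + b; }
    public int mul(int a, int b) { return a * b; }
}

public class TestDriver {
    // Each case: { method name, argument array, expected result }.
    // Invokes the named method on the target via reflection and records
    // PASS when the actual result equals the expected one, FAIL otherwise.
    public static List<String> run(Object target, Object[][] cases) throws Exception {
        List<String> report = new ArrayList<>();
        for (Object[] c : cases) {
            String name = (String) c[0];
            Object[] args = (Object[]) c[1];
            Object expected = c[2];
            // Sketch assumption: all parameters are ints.
            Class<?>[] types = new Class<?>[args.length];
            for (int i = 0; i < args.length; i++) types[i] = int.class;
            Method m = target.getClass().getMethod(name, types);
            Object actual = m.invoke(target, args);
            report.add(name + (expected.equals(actual)
                    ? ": PASS"
                    : ": FAIL (expected " + expected + ", got " + actual + ")"));
        }
        return report;
    }

    public static void main(String[] args) throws Exception {
        Object[][] cases = {
            { "add", new Object[]{2, 3}, 5 },
            { "mul", new Object[]{2, 3}, 6 },
        };
        for (String line : run(new Calculator(), cases)) System.out.println(line);
    }
}
```

A full version of the framework described above would additionally generate and compile the driver source at runtime (e.g. via the `javax.tools.JavaCompiler` API), which lets the same three inputs be replayed against two versions of the class for differential comparison.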

 

REFERENCES

[1] Y. Labiche, "Integration Testing Object-Oriented Software Systems: An Experimental-Driven Research Approach," Proc. 24th Canadian Conference on Electrical and Computer Engineering (CCECE 2011), IEEE, 8-11 May 2011, pp. 000652-000655.

[2] T. Xie, K. Taneja, S. Kale, and D. Marinov, "Towards a Framework for Differential Unit Testing of Object-Oriented Programs," Second International Workshop on Automation of Software Test (AST'07), 0-7695-2971-2/07, IEEE Computer Society, 2007.

[3] W. M. McKeeman, "Differential Testing for Software," Digital Technical Journal of Digital Equipment Corporation, 10(1):100-107, 1998.

[4] R. A. DeMillo, R. J. Lipton, and F. G. Sayward, "Hints on Test Data Selection: Help for the Practicing Programmer," IEEE Computer, 11(4):34-41, April 1978.

[5] A. Groce, G. Holzmann, and R. Joshi, "Randomized Differential Testing as a Prelude to Formal Verification," Proc. 29th International Conference on Software Engineering, 2007.

[6] R. Lämmel and W. Schulte, "Controllable Combinatorial Coverage in Grammar-Based Testing," Proc. 18th IFIP International Conference on Testing Communicating Systems, pages 19-38, 2006.