
Talk:Software testing/Archive 1


Introduction issues

Introduction of this page is an insult to our craft

I find the introduction to this page a bit of an insult to the craft of software testing. To state that a good test is one that finds an error is totally misleading and incorrect. Tests can be good if they prove that an error is not present, or if they prove that the system is functionally compliant. To make such a sweeping and incorrect statement only serves to lower the status of Wikipedia.

--Bbornhau 12:21, 17 June 2007 (UTC) I have to disagree with your opinion that stating that a good test is one that finds an error is an insult to our craft. I would agree if we say that this is not the only definition of a "good" test case. As Cem Kaner points out: "There’s no simple formula or prescription for generating “good” test cases. The space of interesting tests is too complex for this." http://www.kaner.com/articles.html (Cem Kaner, "What is a good test case?" [SLIDES] Software Testing Analysis & Review Conference (STAR) East, Orlando, FL, May 12-16, 2003.). So no insult, but "only" not getting the whole picture.

Additionally, I would strongly disagree with the citation of ISO and IEEE standards as providing any form of "complete" list. A comprehensive list, yes, but the term complete here is totally inaccurate; there are many people who disagree with these standards. IEEE 829 is a good example of where our field is completely divided. Perhaps it's wrong to attempt to sum up our craft in Wikipedia. It's a profession, and it's wrong to try to pin it down to such an inane and narrow subset of views.

Finally, to mix the terms software testing and quality assurance is incorrect. It implies that testing is something by which we can assure quality; not so. It's a means of assessing quality.

So why don't you fix it yourself? You seem to know the subject. The whole point of Wikipedia is that it's constantly improved, so why don't you fix it yourself instead of complaining? - Jan Persson

I strongly disagree that software testing is to measure the quality of software. Software testing should be a process to verify software quality against required quality. Required quality can be defined in terms of requirements that can be specified in a software quality model such as that in ISO/IEC 9126:1991. Measuring software quality is a different topic and would involve quantifying software quality and comparison across software products and industries.
--Francis Law (talk) 01:36, 20 December 2007 (UTC)

I just came across the following sentence: "An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (S.Q.A.), which encompasses all business process areas, not just testing", and I felt there could be some modification to this. The usage of "separate discipline" may cause some confusion for the reader. The subject of software testing comes into an area where we are looking towards maintaining quality. This quality is a comparative term in view of the final expected result (which we derive from the specified requirements). Software Quality Assurance indeed starts as soon as the software plan is thought about, and testing for sure comes much later in the proceedings. I would think of Software Quality Assurance as an end-to-end process for a piece of software, which exists for all the time software development is there. Testing is indeed a part of this end-to-end process. Again, I would make it an important note: kindly do not consider testing and SQA as different or the same. They are just interlaced. Testing helps SQA and SQA helps to test software. - Shivam Mishra —Preceding unsigned comment added by 59.160.193.34 (talk) 12:45, 18 February 2008 (UTC)

Regarding "axioms"

The term "axiom" is probably misleading in this context (see Axiom) and should be changed to something more describing, such as "more or less undisputed facts regarding software testing". Further, each statement should be followed by a short justification (since they are not axioms after all). —Preceding unsigned comment added by 134.47.109.183 (talk) 08:57, 18 December 2007 (UTC)

How about (external) links to Ward Cunningham's wiki? His wiki has a lot of software development stuff: patterns, methodology, etc.
I added a bit about test-driven code, which indirectly refers to the brunt of the discussion on Ward's wiki. JimD 04:48, 2005 Jan 9 (UTC)

I also added a link to Ivars Peterson and a reference to his book. I feel a little awkward doing so as I'm not trying to endorse it in particular; it just happens to be the resource that I thought appropriate to that juncture.

I'm also too tired to go back through and clean up my edits more and do additional research at the moment; but I feel like the work I did was better submitted, even in rough form, than discarded. So we'll see what the rest of the Wikipedian community as a whole makes of it. :) Edit Boldly (In particular I know the Latin is awkward. It sticks out like a sore thumb. Fix it. Please) JimD 04:48, 2005 Jan 9 (UTC)

Removed paragraph on being able to prove things in a court of law. Almost all software companies disclaim liability for buggy software, and except for a few life-critical pieces of software, the prospect of being sued isn't a strong motivating factor. Also, except for life-critical software, most software testing does not seek to eliminate all defects since this is generally too expensive to be worth the cost.

However I did engage in an assignment once, the objective of which was to test whether the software might leave the supplier exposed to an anti-trust law suit. The software processed data feeds containing financial data and the licence required that this data was capable of being processed by competitors' systems. Without using any of these systems we had to verify that the software and documentation didn't breach this licence. Matt Stan 01:04, 10 Jan 2005 (UTC)

Software testing, like software engineering and methodologies, is largely defined by common practices and fashions.



Despite companies disclaiming liability for buggy software, many of those disclaimers have not been upheld in court. Cem Kaner has suggested this a few times. --Walter Görlitz 20:49, 12 Nov 2004 (UTC)

Gamma Testing discussion is off-beat. This might be a better description: http://www.smartcomputing.com/editorial/dictionary/detail.asp?guid=&searchtype=1&DicID=10215&RefType=Dictionary

"Some cynics say..." -- Really, does someone have a reference for this? If not, i suggest deleting entire discussion of gamma testing.

Anomaly: fault vs. failure vs. error vs. defect

The current version of the article differentiates "fault" from "failure" in a comprehensive way. However, it does not differentiate both concepts from "error" and "defect". I think this is an important observation that has been overlooked so far. --Antonielly 18:31, 16 March 2006 (UTC)

IEEE recommends to use the word anomaly, see here:

":In software testing an anomaly (see IEEE 1044-1993: Standard Classification for Software Anomalies) is everything that differs from expectation. This expectation can result from a document or also from a persons notion or experiences. Also an anomaly can be a feature or an usability problem, because the testobject may be correct regarding the specification - but it can be improved. Another possibility for an anomaly is that a tester executed the testcase wrong and therefore the expected result is also wrong. Like IEEE says, the word anomaly should be favoured instead of e.g. fault, failure, error, defect or problem, because it has a more neutral meaning." --Erkan Yilmaz 15:31, 28 October 2006 (UTC)

Controversy Section, etc.

It's about time we recognized that prominent people in the industry have very different views of testing. I confess, I was tempted to throw this entire software testing article out and rewrite it without the gratuitous references to weak testing folklore such as black box and white box testing-- an idea that has almost no content, conveys no skill or insight, and merely takes up valuable room that could be used to discuss the bona fide skills and techniques of software testing. (If that sounds arrogant to you then my point is proven: there is a lot of disagreement...)

But, in the spirit of wikidom, rather than tear the article up, I added the section on controversy, and I just added the second paragraph which introduces the notion of investigation and questioning as central to testing.

I intend to come back periodically and morph this entry into something I think is useful, unless there's a lot of push back, in which case we should incorporate the controversy into the article, I believe. Or establish new articles where the various schools of thought can play alone.

-- JamesBach

I write software for a career, and have been in the business for 40 years. Everyone who writes software has a responsibility to address assurance that the software will work correctly, but we usually work for managers who have some kind of budget for how much work is justifiable. The most common rule of thumb is "good enough." The methods of testing that I have used have evolved over time, based on what I have learned in my profession and from experience. Some major considerations:
  • Ideally we want to have a model of the production data that can test all possibilities, so that if anything goes wrong, because the software is not yet perfected, then what is damaged is the model - the test data that was copied from the real production data. 99% of the time, having a test database, or model that is representative of the real data, is an expense that the managers do not support.
  • Before testing software that is to update files, it is smart to make a backup of the files that are to be updated, so that if anything goes wrong, we can recover from the backup.
  • While we are expecting, hoping for certain changes to the data, it is always possible that we will get unexpected updates where not wanted, so we need tools that can compare the before and after data to identify exactly what did change as a result of the tests (a sketch of such a comparison follows this comment).

User:AlMac|(talk) 08:25, 31 January 2006 (UTC)
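
As a minimal sketch of the before/after comparison mentioned in the last point above (assuming the data of interest sits in a SQLite table; the table, key column and sample rows below are made up for illustration):

    import sqlite3

    def snapshot(conn, table, key):
        """Return {key_value: row} for every row in the table."""
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]
        key_idx = cols.index(key)
        return {row[key_idx]: row for row in cur.fetchall()}

    def diff_snapshots(before, after):
        """Report keys added, removed, or changed between the two snapshots."""
        added   = sorted(k for k in after if k not in before)
        removed = sorted(k for k in before if k not in after)
        changed = sorted(k for k in after if k in before and after[k] != before[k])
        return added, removed, changed

    # A stand-in for the test copy of the production data (hypothetical schema).
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.0)])

    before = snapshot(conn, "orders", "order_id")
    # ... here the update program being tested would run; simulated by one change ...
    conn.execute("UPDATE orders SET amount = 25.0 WHERE order_id = 2")
    after = snapshot(conn, "orders", "order_id")

    print(diff_snapshots(before, after))   # expect: ([], [], [2])

The same idea works for flat files or report extracts: capture a keyed snapshot before the run, capture another afterwards, and diff them.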

Whoever authored/edited this portion of the entry:

"The self-declared members of the Context-Driven School..."

...clearly does not hold that group in very high esteem. Perhaps dropping "self-declared" would minimize the bias of that statement. 66.195.137.2 14:59, 24 March 2006 (UTC)

I, James Bach, wrote that. I am a founder of the Context Driven School. It's not disparaging, it's just honest. We are self-declared. Maybe there's a better way to say it, though? User:JamesBach

Test Scripts versus Test Cases

This article perpetuates the confusion between test cases and test scripts. It would be best if someone could point out that common usage is not "test script" but test case and test scenario. A test script is usually used in automated testing, such as functional GUI tools (WinRunner, Silk Test, Rational Robot, etc.) and unit tools (xUnit, Ant, etc.). --Walter Görlitz 14:45, 28 August 2005 (UTC)

I have attempted to address this in the test cases section, but common usage is not test case and test scenario, it is test case and test suite. Scenario tests are not necessarily related to traditional test cases. --Walter Görlitz 18:23, 20 October 2005 (UTC)

I disagree with Walter Görlitz with respect to "test script." It is not unusual, before testing, to write out some kind of an outline of the planned test: what will we be checking, what are we looking for, how will we be able to tell if something went wrong, and what will we do about it. 90% of my tests are on production data, and I have had the good fortune to be able to run them at a time when, if something goes badly wrong, I can recover the data to what it was before the test started. This written outline of the plan of action, and how the data is to be recovered if the test goes badly, is a "test script." User:AlMac|(talk) 08:29, 31 January 2006 (UTC)
Disagree all you want. A test script has two definitions and the one you have defined falls into neither of them. You have described a Test plan or possibly a test strategy: how you plan to do the testing. A script could be a written test case or it could be used for automated testing. Feel free to use your derivative form though, but it's not common usage. --Walter Görlitz 23:48, 2 February 2006 (UTC)
Guys, common usage varies with community. In my circle, we use "test script" as a synonym for test procedure, which means a set of instructions for executing a test. It may or may not be automated. That issue is disambiguated in context. I can't speak for the whole testing universe on this, but then again, neither can anyone else. So, maybe if you have a variation you want to talk about, then talk about it. -- User:JamesBach
I use the following definitions. Test Plan - An outline of the approach to test the product, describing in general terms the features and improvements, schedule, resources and responsibilities, test lab, etc. etc. Test Case - A document or paragraph outlining the steps required to query the issue being tested. Test Script - an automated test case executed by whichever application delights you. Test Suite - The suite of manual and automated tests executed against the release. Methylgrace 19:44, 30 August 2006 (UTC)
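
For the automated sense of "test script" described above, here is a minimal xUnit-style sketch (Python's unittest; the add function is only a stand-in for a real unit under test) showing individual test cases grouped into a suite and run by the framework:

    import unittest

    def add(a, b):
        """Stand-in for the unit under test."""
        return a + b

    class AddTests(unittest.TestCase):
        """Each test_* method is one test case; the class groups related cases."""

        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negatives(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        # The loader collects the cases into a suite; the runner executes the suite.
        suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
        unittest.TextTestRunner(verbosity=2).run(suite)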

Regarding the Custodiet Ipsos Custodes section

It was my understanding that a Heisenbug was a defect that occurred only when the software was run in release mode, but stopped occurring when run in debug mode. The act of observation changes the nature of the application and the defect disappears. --Walter Görlitz 18:23, 20 October 2005 (UTC)

Certification by ISO 9001??

In section 13 it is stated that "No certification is based on a widely accepted body of knowledge."

Isn't there the ISO 9001 quality assurance certification? If someone knows more details about ISO 9001, it would be cool if they could take a look and change the section if it's needed. Thank you. :)

(For tracking: Don't remove this comment until this question is 100% clear.)

There is an ISO 9000 certification specifically related to computer software quality, and one related to computer security assurance. There are similar systems in other nations, which are in the process of being combined. There has also been legislation in the USA called Sarbanes Oxley or SOX for short, which mandates a process of software change methodology that approves and tests changes. Hundreds of companies have been audited to make sure they are compliant with these standards. User:AlMac|(talk) 08:19, 31 January 2006 (UTC)
The certifications you speak of have absolutely nothing to do with testing skill. In fact, they have little to do with anything that should matter to people who want excellent software testing. Both ISO 9001 and Sarbanes-Oxley are simply mechanisms by which large consulting companies suck money from other large corporations. We should be ashamed that our craft is so manipulated. Besides, neither certification has anything to do with a body of knowledge, widely accepted or not.-- User:JamesBach

The real scope of testing

You said: "In other words, testing is nothing but criticism or comparison, that is comparing the actual value with expected one." I must say that TESTING is much more then that. First of all, testing is about improving (ensuring) the quality of a product.

I would add that testing or quality analysis also provides more than the comparison of actual and expected. It also provides customer-facing/business-logic testing to help ensure that the product being created is really meeting the needs of the customer... by meeting the requirements.

The real problem with trying to "define" software testing is that you first need to understand the many aspects of software testing. So far I haven't discovered any articles that attempt to cover these.

I'd say put this under the controversy section, and while you're at it, you might as well put the rest of the section there as well. I have to say that the starting page for Software testing was definitely not to my liking, and not differentiating between software testing and software quality assurance is a bad start.

I agree. That's like claiming that medicine is nothing but comparing observed symptoms to diseases. There's more to both medicine and software testing than that. The fact that you cannot get an absolute guarantee is irrelevant. You can't get absolute certainty out of anything.--RLent 16:39, 20 February 2006 (UTC)
I disagree. I think testing can be about improving the product, but only if you're not a tester. A tester's job is to observe, compare, infer, report, but not to improve anything. Testers who consider themselves paladins of quality get marginalized. That way lies madness, friends. In any case, if you believe that, then you are a representative of a particular school of testing theory, as am I a representative of a different school. By all means, say whatever you want to say, but don't claim to speak for all of us. I think that's a big problem with Wikipedia. How are we to write entries for controversial subjects? Testing is controversial. We just have to deal with that. User:JamesBach
If any tester where I work ever thought they were just 'completing a test' and not really looking to improve the quality of the product, I would hazard to guess the improvements gained by testing would be minimal. That aside, I'm unclear why you think the concept of 'testing' wouldn't include improving the product, regardless of whether or not a 'tester' is the one doing the improvements. Testing as a concept would invariably lead to improvements, or there'd be no need for testing! Pepkaro 22:42, 2 March 2007 (UTC)
That may be right. I think the most neutral statement might be that tests themselves do not improve the quality of the system. They measure the quality of the system, which means it is being compared against something that states how the system should work. Thus, after testing, you have an idea as to how fit your product is. Then again, I find myself reporting defects and tickets precisely because I want to get the error/fault/failure out of the way before the software goes into production. This however - to me - is part of the testing process. Testing should not take the responsibility for software quality out of the hands of the developers. It is a very fine line to be drawn, and maybe it is not even adequate, but in my experience, if the quality improvement is left to the testers, this is not quite as efficient as leaving it with the guys who wrote the code in the first place. Tprosser 12:42, 15 May 2007 (UTC)

Atul here,


Software testing is to check what the software is expected to do and what it is expected not to do, and at the same time without disturbing the entire software. —Preceding unsigned comment added by 203.196.250.214 (talk) 10:28, 11 February 2008 (UTC)

Software Reliability Engineering (SRE)

This article is weak. SRE is not mentioned. There is no mention of any testing practice designed to measure mean time to failure (MTTF). (The ambiguous Load Testing is mentioned but with no focus on a MTTF goal.)

No mention of the Cleanroom-related controversy where the Cleanroomers advocated SRE as the only testing and no coverage testing.

No mention of the fact that practical coverage testing cannot provide reliability, since it is not practical to cover all the places where a bug can lurk, and coverage testing includes no attempt to measure the failure rate of the remaining bugs. (By reliability, I mean a low failure rate, i.e. a high MTTF.) (BTW, Dick Hamlet proved that perfect coverage testing is equivalent to a formal proof of correctness, in "Partition Testing does not Inspire Confidence")

Need a discussion of what the goals of testing are. Need to discuss the strengths and weaknesses of various methods in reaching these goals. Two kinds of goals: goals for the testing process, like line coverage, and goals for the delivered system, like a high MTTF.

Need a discussion of how to determine when to stop testing.
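
As a rough illustration of measuring such a delivered-system goal (not any particular SRE model; the failure times below are invented), the mean of the observed inter-failure times gives a crude MTTF estimate, and under an exponential assumption R(t) = exp(-t/MTTF) is the probability of surviving a mission of length t without failure:

    import math

    # Invented cumulative failure times (hours of test operation at each observed failure).
    failure_times = [12.0, 30.0, 55.0, 90.0, 160.0]

    # Inter-failure times; their mean is a crude MTTF estimate.
    gaps = [b - a for a, b in zip([0.0] + failure_times[:-1], failure_times)]
    mttf = sum(gaps) / len(gaps)

    # Under an exponential reliability model, R(t) = exp(-t / MTTF) is the
    # probability of no failure during a mission of length t.
    mission = 24.0
    reliability = math.exp(-mission / mttf)

    print(f"estimated MTTF: {mttf:.1f} h, R({mission:.0f} h) = {reliability:.2f}")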

  • Isn't "mean time to failure" more of a hardware than software concept?
  • I have worked for small companies for approx 40 years, where most of the time I report to a manager who is outside of computer expertise.
    • The purpose of testing has been multi-fold
  1. Does the software do that which was requested by whoever it was, outside of computer staff? (me)
  2. Do we see room for obvious improvements, and have those improvements been made successfully?
  3. Is the software user-friendly and intuitively obvious to operate, such that the risk of human error, using it, is as low as we can make it?
  4. When humans enter bad data, does the software catch that in a manner that is easy to resolve?
  5. "The time to stop testing" is when the software is working satisfactorily, collectively we not see how to further improve it, other software projects have become more important to be working on.

User:AlMac|(talk) 08:39, 31 January 2006 (UTC)

I've removed the link to 'testers' from the fifth paragraph of the 'Introduction' section, as it linked to an inappropriate article with no disambiguator. It's difficult to see how a separate section on 'testers' would be justifiable anyway. --Dazzla 15:58, 5 January 2006 (UTC)

Recent addition

Testing analysis can be also measured the bugs reported from site or from customer in the deployed application for the customer requirement. If bugs are releated to functional then it is better to again review the functional test case because may be there is possibility some testing or functional scanrios are missed out.

I may be dim today, but I don't see what this means. It certainly needs to be rewritten in better English. --David.alex.lamb 20:27, 24 February 2006 (UTC)

Beta test(ing)

This might be a little confusing to readers: Beta testing redirects to this article, yet Beta test redirects to Development stage. They should be more consistent, because I happened upon the latter article after omitting the "-ing" suffix out of curiosity. – Minh Nguyễn (talk, contribs) 07:51, 23 April 2006 (UTC)

Copyvio

I've just removed a large section which was copied straight from [1]. If people could keep an eye out in case it gets added back, I'd appreciate it. Shimgray | talk | 16:32, 23 April 2006 (UTC)

I was wondering what people's views are about what should, and should not, go in the External links section. There are many Open Source test tools, such as Bugzilla and the test case management tool QaTraq, which are very relevant to software testing. As a contributor to one of these tools I personally feel there is nothing wrong with listing these tools in the External Links section.

Personally I feel it's both useful and informative to add links to tools like these in the External Links section. However, before adding a link again I'd like to know if other people consider this type of link useful and informative under the Software Testing article. William Echlin

I would not like to see such links added. Wikipedia is not a vehicle for advertising (regardless of whether or not the thing being advertised is open source). Wikipedia is also not a directory of links. Start adding links for a few tools, and pretty soon everyone wants their favorite tools added to the list — a list that will soon grow to dominate the article. Style guidance for the use of external links can also be found here.
There are plenty of other sites that provide directory services. Why not just include a few relevant links to such sites, such as the Open Directory Project directory of software testing products and tools, or SourceForge's testing category? These sites are likely to be far more comprehensive than anything that might be added here, and also far more likely to stay up to date. --Allan McInnes (talk) 21:31, 26 May 2006 (UTC)
You make some good points there. I see now that this is not the right place for individual links to tools. Perhaps a single link to OpenSourceTesting.org would be worth considering as well. Thank you for pointing me in the right direction to the 'Style guidance for External Links' too. I had been looking for something along these lines. You make a good custodian of this topic. William Echlin 08:06, 27 May 2006 (UTC)
Thank you. I think a link to OpenSourceTesting.org would be fine. I'll add that, and a few of the directory links I mentioned, to the article. --Allan McInnes (talk) 16:58, 27 May 2006 (UTC)

Quotes

Some quotes were removed and I cannot seem to find the discussion that went with it, nor can I really see the reason or need for the removal. Being new to looking at the wiki editing background, I was wondering if this is common practice. If the removal is done based on a single person's view, could I edit it back (and gain little)? The main reason for this question is the quote "Software Testers: Depraved minds, usefully employed." -- Rex Black, which I found very accurate and recognizable, and which was removed. --Keeton 08:19, 11 July 2006 (UTC)

I removed the quotes in question because the quotes section seemed to be getting large (the quote section is largely deprecated on Wikipedia, and is generally supposed to be kept small if it exists at all). The removed quotes were all credited to people who are apparently not even prominent enough to warrant a Wikipedia article (i.e. they were red-links). The removal was a bold move on my part. If you believe the quotes in question are useful and important, feel free to revert my changes. --Allan McInnes (talk) 14:29, 11 July 2006 (UTC)

Alpha Testing

I don't believe the description of alpha testing concurs with the definition that I understood, and that appears to be backed up by googling, which is that alpha testing involves inviting customers/end-users to test the software on the developer's site. This being the distinction from beta-testing, which involves testing by customers on their own site. Testing by developers/in-house test team is, as I understand it, separate from alpha testing (and ideally done before alpha testing). Can anyone provide authoritative references that support the existing definition? --Michig 09:37, 17 July 2006 (UTC)

Generally, alpha testing happens at the software prototype stage, when the software is first able to run. It will not have all the intended functionality, but it will have core functions and will be able to accept inputs and generate outputs. In-depth software reliability testing, installation testing, and documentation testing are not done at alpha test time, as the software is only a prototype.
Digitalfunda 11:27, 26 September 2006 (UTC)
Perhaps the confusion is between alpha versions and alpha testing. The difference between Alpha and Beta testing is how the testing is carried out, and not necessarily how 'complete' the software is (though of course, later stages of testing will generally correspond to software being more complete). I have found several authoritative sources that describe alpha testing as testing by customers on the developer's site, and none that describe it as initial testing by developers/test staff. --Michig 12:41, 26 September 2006 (UTC)
I'm sorry. Where does Googling back you up? Alpha testing is never performed by the user. If someone has written a web site that indicates that, they're using their own definition. If a customer or client touches the software for the purposes of testing, it is a beta test. Don't forget, this testing is done in the beta phase of development and that is why it's called a beta test. --Walter Görlitz 07:31, 13 October 2006 (UTC)
Googling is a way of finding information. I am not claiming Google as a definitive source, I am merely stating that by searching on Google I found a lot of definitions of alpha testing that agree with the one that I understood. The IEEE SWEBOK defines alpha and beta testing as follows: "Before the software is released, it is sometimes given to a small, representative set of potential users for trial use, either in-house (alpha testing) or external (beta testing). These users report problems with the product. Alpha and beta use is often uncontrolled, and is not always referred to in a test plan." [2]. Personally, I would say that this is quite an authoritative definition, and if this is 'their own definition' then that carries some weight with me. I haven't seen one authoritative source that concurs with the current definition in the article. If there are any, it would be useful if you could cite them. --Michig 08:13, 13 October 2006 (UTC)
I'm not asking what Google is. I'm asking for actual sources from Google that back you up. --Walter Görlitz 16:19, 14 October 2006 (UTC)
...which I have provided, so what's your point?--Michig 16:34, 14 October 2006 (UTC)
OK. I'm blind. You provided one link and that is the sum of the Google links that back your non-standard definition? OK. Seems simple: alpha is in-house. Beta is after that, where you can open testing up to the clients or customers. --Walter Görlitz 18:58, 15 October 2006 (UTC)
Actually if you read the whole section, I have provided 3 references to formal standard descriptions that disagree with your definition, and DRogers has provided another. I found many more links from Google that backed up these definitions, but on-line dictionaries and technical websites are of variable quality (and several just take their definitions from Wikipedia) so I chose not to include them. You have provided one reference to an informal definition from one of the many books on the subject and one that doesn't appear to refer to alpha testing at all. I have cited the British Standard, the standard definition used by the ISTQB, and the standard definition from the IEEE. All of these are standard definitions. The fact that I 'know' these to reflect what alpha testing means and that you apparently 'know' these to be wrong is irrelevant - refer to Wikipedia:Verifiability.--Michig 19:20, 15 October 2006 (UTC)
For what it's worth, my SQE training book for the ASTQB's CTFL test says that alpha testing is "Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing." Walter, just out of curiosity, where did your definition come from? A formal source, or experience? DRogers 12:35, 13 October 2006 (UTC)
From common usage. From Testing Computer Software, Kaner et al., where it doesn't talk about alpha testing but alpha milestones. From Lessons Learned in Software Testing, Kaner, Bach, and Pettichord, p. 34: "Alpha testing. In-house testing performed by the test team (and possibly other interested, friendly insiders)." It goes on to add a definition of beta testing immediately afterwards. Your SQE book is not using common industry standards, as alpha testing is only done by insiders, not users. They may mean to say that you are simulating user or customer activity, but it is certainly not performed by users or customers. Even in Agile development, where the concepts of alpha and beta are quite blurred, the stakeholder would not get the software mid-iteration. It would only be delivered at the end of an iteration. Alpha is in-house. Beta is when the product can be disseminated to a wider audience. --Walter Görlitz 16:19, 14 October 2006 (UTC)
I think that SQE tries to find convergence on the definition. It mentions that alpha testing is performed by potential end users or an independent test team. Where I work, it is an independent test team. But I've seen some open source software that allows the download of an alpha version, which implies that I, a potential end user, am testing their software. So I think either test teams or users can do the alpha testing, and that SQE is just trying to give a definition that's generally satisfactory. DRogers 17:11, 14 October 2006 (UTC)
Convergence? The definition is just plain inconsistent with the rest of the industry. That's not convergence. It's re-casting a definition that has some common understanding already.
Also, using open source to define alpha phases is like using miniature aircraft hobbyists to define aircraft design definitions and policies. Open source's motto is "release early, release often"; this is one reason why they give their software out during alpha phases. The other reason is obvious: they don't have the formal testing that a commercial product would have. Large open source projects (Mozilla Firefox, OpenOffice, and Bugzilla are three immediate examples that spring to mind) have three streams: nightly builds (alpha projects), development streams (betas) and stable (the equivalent of released software). Open source is not a good example.
I don't mind altering the article to indicate that small open source projects and SQE have a definition that diverges from the rest of the industry's definition, but I do mind the dilution of the definition. --Walter Görlitz 19:06, 15 October 2006 (UTC)
Yeah, that's correct; however, different companies take alpha testing differently. I have been in this domain for the last 4.5 years, and I have seen both alpha versions and alpha testing. As you have confirmed this, in this case I would like you to add the details regarding alpha testing so that everyone can benefit. Digitalfunda 06:05, 27 September 2006 (UTC)
If this is true, it's sad. It's one of the few definitions that seemed to be universally held across all companies. --Walter Görlitz 16:19, 14 October 2006 (UTC)
I think we all just need to accept that the term can be used to cover testing that is always independent of the developers but can be carried out by either an independent test team or by customers, and is done on the developer's site. At least as far as the UK is concerned, the official definition from the British Computer Society (and British Standard) is: "Simulated or actual operational testing at an in-house site not otherwise involved with the software developers." BS 7925-1. British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST). Actual operational testing surely means by users. I really think the article (and the one on alpha testing) should be worded to cover all definitions from accepted standards and others that are commonly used.--Michig 16:34, 14 October 2006 (UTC)
Here's another definition, this time from the International Software Testing Qualification Board's Standard Glossary of terms used in Software Testing: "Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing." Recent changes to the Software testing article now broadly concur with this description, but Development stage (redirected from alpha test) doesn't.--Michig 16:52, 14 October 2006 (UTC)

Levels

Regression testing can be performed at unit, module, system or project level.

What is the difference between unit level and module level? (and may someone explain system level and project level as well?) Thanks, --Abdull 14:34, 27 July 2006 (UTC)

Qualification

What does the qualification of a product actually mean? Does it install and function with the new product? I am currently trying to qualify a product with SQL Server 2005 and want to broaden my scope.

Skills

Hi, I added the category skills, since software testing should be done by skilled, trained people. See here what some authors say about skills in the software testing field (with videos). To distinguish from certifications: certifications do not necessarily mean that a person working in the testing field is also skilled. --Erkan Yilmaz 15:25, 28 October 2006 (UTC)

Certification

This section seems to be the subject of controversy recently. While the statement "No certification currently offered actually requires the applicant to demonstrate the ability to test software" is true of probably most of the available certifications (and the same criticism could be levelled at many areas of IT certification), the ISEB Practitioner-level certification would appear to be different, as according to the BCS website "The Practitioner Certificate is for experienced testing practitioners. This certificate demonstrates a depth of knowledge of testing topics and the ability to perform testing activities in practice." Are they just making this up, or does the article need to be changed to reflect this certification? --Michig 10:17, 5 September 2006 (UTC)

We need to change this article to a certain extent, as these days certifications like CSTE are becoming a must if you are in the field of software testing.
I have updated this section with a list of popular certifications. Digitalfunda 06:22, 27 September 2006 (UTC)
I know that for the ISTQB Certified Tester Advanced Level, professional work experience is needed before you can do the Advanced Level, but for Foundation Level no prerequisites are necessary. More info can be found on these websites: [3] [4]
--Erkan Yilmaz 02:08, 30 September 2006 (UTC)
Digitalfunda, I think that with your updates, Certification deserves its own section. It doesn't make sense anymore to have it in the Controversy section. DRogers 14:48, 3 October 2006 (UTC)

I have added a new article on CSTE; let's see the response it gets. The discussion page is open for peer reviews. Digitalfunda 03:52, 13 October 2006 (UTC)

see my edits on CSTE, hope they help you to improve it. :-) --Erkan Yilmaz 16:33, 12 October 2006 (UTC)


also divided the certifications into exam-based and education-based. This is from: Dr. Magdy Hanna, IIST Chairman & CEO (2006): The Value of Certification for Software Test and Quality Professionals --Erkan Yilmaz 16:33, 12 October 2006 (UTC)

removed Certification from the Controversy section, I don't remember any controversy surrounding them. Digitalfunda 03:52, 13 October 2006 (UTC)

Code Coverage

I think the section on code coverage is worthwhile information. But maybe it could be moved either to the code coverage article, and this article could still link to it, or it could be moved to the white box testing article. Any input? DRogers 14:41, 3 October 2006 (UTC)

Regression testing

Hi all, from my point of view I can say: when a small change has occurred in a module, what we are doing is comparing the modified module with the previous one to check whether the modified/changed module has any ill effect on other modules or not. That is called regression testing (a small sketch follows below). If someone has any idea about the same then please let me know.

S/w Engg:-Sunil kumar Behera (Kuna)
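
A small sketch of the comparison described above, assuming the old module's outputs were captured as a baseline (the process function and the baseline values are hypothetical): re-run the same inputs against the modified module and flag anything that no longer matches.

    def process(order_total):
        """Hypothetical module under test, recently modified."""
        return round(order_total * 1.2, 2)   # e.g. price including 20% tax

    # Expected outputs captured from the previous, known-good version of the module.
    baseline = {100.00: 120.00, 19.99: 23.99, 0.00: 0.00}

    regressions = {given: (process(given), expected)
                   for given, expected in baseline.items()
                   if process(given) != expected}

    if regressions:
        print("ill effects found (got, expected):", regressions)
    else:
        print("no ill effects: all", len(baseline), "baseline cases still pass")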

Gray box testing

I'd like to see a better explanation of "seeding the database". I also disagree with the sentence "It can also be used of testers who know the internal workings or algorithm of the software...". Doesn't that describe white box exactly?

Also, how do I suggest a cleanup of the Talk page? DRogers 16:52, 5 October 2006 (UTC)

How would you like "seeding the database" explained? Also, you can disagree with that statement but it does not describe a white box tester. A white box tester has access to the code. A grey box tester does not. They only know the algorithms or the "internal workings". If you follow the "box" analogy, if you knew that there were levers and gears, as opposed to pneumatic actuators and hoses, inside the box but did not know their placement, you would be a grey box tester. A grey box tester can only attack the system through the UI or an API whereas a white box tester has the option to set break points or edit variables while the application is in use. --Walter Görlitz 07:28, 13 October 2006 (UTC)
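
A minimal sketch of what "seeding the database" can look like, assuming a SQLite store and a hypothetical find_overdue_accounts function that is only exercised through its public interface: the tester uses knowledge of the schema (the grey box part) to plant rows that should drive specific behaviour, then checks the result from the outside.

    import sqlite3

    def find_overdue_accounts(conn):
        """Hypothetical function under test, exercised only through its public interface."""
        cur = conn.execute("SELECT name FROM accounts WHERE balance < 0")
        return [row[0] for row in cur.fetchall()]

    # Grey box knowledge: we know the schema, so we can seed rows that should
    # drive the code down specific paths, without looking at the code itself.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 50.0), ("bob", -10.0), ("carol", 0.0)])

    assert find_overdue_accounts(conn) == ["bob"]
    print("seeded-database check passed")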

Gray box testing

Hi, if I can say it in a single sentence, then I can say gray box is a combination of both black box and white box; a little knowledge of the coding is gray box testing.


Sunil Kumar Behera —Preceding unsigned comment added by 59.145.150.153 (talk) 12:08, August 30, 2007 (UTC)

RE: Editorial Viability Assessment

Software is defined by a specification -- unless it's hacked. Software test and measurement consists of stimulus against the product specification to determine defects in the embodiment. If that's not an editorial function, I don't know what is. Perhaps you object to the abstraction? Perhaps you object to the application of pre-checkin evaluation to preclude regression introduction? I've been in the software test engineering game for over 10 years now. The technique I outline has been successfully applied in very large and very small software factories. That no curriculum previously exists describing it should not interfere with discussion about its effectiveness.

Can the material be moved somewhere? I suppose so. But where does it belong? Seems appropriate to outline additional test engineering techniques under the subject of software testing. --Rmstein 19:37, 5 October 2006 (UTC)

I would say that it belongs in an article of the same name if anywhere at all. The Software Testing article is already getting too large and would be much better just giving an overview of the main aspects of software testing, and I don't believe Editorial Viability Assessment is sufficiently notable to be in the main article. The (large) image isn't particularly helpful, either.--Michig 19:49, 5 October 2006 (UTC)
Given my experience, I'd say I'm still at the introductory level. I haven't come across this terminology in this industry yet. That tells me that, like you said, maybe this is at the wrong level of abstraction. So maybe it belongs in its own article. But I'm not sure. Can you cite some sources or list some references, external links, etc.? DRogers 20:01, 5 October 2006 (UTC)
I ripped out the content and stuck it in a separate page, placing a link to it in the software testing page. Appreciate the feedback. I have not encountered any technical discussion of this abstraction in the literature. One sees lot about test-driven development, agile techniques, etc. which may or may not mirror the pre-checkin/post-integration evaluation technique I discuss. Depends on the software factory size primarily -- fewer talented authors are preferred to many mediocre authors. I've applied this methodology in large and small factories to stabilize releases. If the bits aren't stable, you've got a software toxic wastedump to manage. An editorial technique saves a little on kitty litter. A lot of shops seem to churn and burn their customers, products, and organizational participants. Save for the product monopolies, a broken software factory that cannot embrace a proactive means to enforce continuous release-ready bit maintenance is usually doomed to the bit-bucket (in a globalized economy at least).-- Rmstein 12:24, 6 October 2006 (UTC)

Exploratory vs. Scripted

Did you mean to say exploratory versus systematic software testing, or even better yet, call exploratory by its real name, ad-hoc testing? The word scripted, as it pertains to software testing, is the past tense of writing an automated script or test case. When explaining or describing Madonna's life choices in Wikipedia I could understand the use of the word misunderstood. As Sergeant Friday said in the TV show: just the facts, ma'am, just the facts.

MichaelDeady 22:07, 9 October 2006 (UTC)


Both exploratory and scripted approaches can be systematic. The approach I choose (exploratory or scripted) often has little to do with how systematic I am, and has more to do with how I want to structure my work and how much freedom I want as a tester. Just because I write test cases down (scripted) doesn't mean I'm methodical in my test design, coverage, or execution. It just means I wrote my tests down. Likewise, just because I say I'm doing exploratory testing, it doesn't mean I'm methodical in my test design, coverage, or execution. How systematic you are is not dictated by the approach.

Some exploratory testing is ad-hoc, but not all ad-hoc testing is exploratory. There are specific skills and tactics that occur in exploratory testing that may or may not appear in ad-hoc testing: modeling, resourcing, questioning, chartering, observing, manipulating, pairing, generating and elaborating, overproduction/abandonment/recovery, refocusing, alternating, collaborating, branching and backtracking, conjecturing, recording, and reporting. For descriptions, do a Google search on 'exploratory testing dynamics' and read a little about what ET is and how people actually do it.

-Mike Kelly

If the term exploratory can be used to explain ad-hoc testing, the same rule applies: a person may perhaps say that scripted testing could be called by its proper name of Systematic Testing. You could also call what I do as exploratory when I write test plans, cases, and risk assessments. But when I place the word “exploratory” in the aforementioned context it means an overall saving of time and money.

I just wanted to point out the incorrect use of the word scripted with a little bit of flair. I believe the correct statement should be along the lines of “Exploratory vs. Systematic”. Both methodologies have their good and bad points.

Just as you stated above, exploratory goes much further into process than just saying ad-hoc; the same can be said about Systematic when using the word scripting to describe overall processes. “Systematic Test” is unique in that it defines a test to include more than a procedure performed after the system was fully assembled. Systematic testing includes test planning, test case design, test implementation, and test execution. The key of systematic testing is that the time and effort exerted on fixing problems is sharply decreased by early detection. But more importantly, the test process helps to put all the issues on the table so that fewer open items remain in the later development stages.

Just as if we were writing white papers, it really just boils down to power words and semantics.

MichaelDeady 15:53, 11 October 2006 (UTC)


Michael, thank you for your reply. It's much clearer where you are coming from than in your first post.

1) "If the term exploratory can be used to explain ad-hoc testing..."

Who said the term exploratory can be used to explain ad-hoc testing? I'm not saying that. I'm saying they are two different things.

2) "...called by is proper name of Systematic Testing."

Who said this is its proper name? Are you making an appeal to an authority here, and if so, whose?

3) "You could also call what I do as exploratory when I write test plans, cases, and risk assessments."

Perhaps. Exploratory testing is simultaneous learning, test design, and test execution. It's having the new information you learn affect the very next test you run. If you are only /writing/ your test plans, cases, and assessments, then I don't know how you could be exploratory. There is no execution there, just documentation. Are you saying that you are exploratory in writing something down? If so, how? What new information are you learning when you document without the feedback of test execution?

4) "...the same can be said about Systematic when using the word scripting to describe over all processes."

Ok, so here perhaps is the problem. When I say scripted, I simply mean "prepared ahead of time" or written down. It's not exploratory, because it's not influenced by what you learned. I'm not talking about how systematic you are at all. To me, that's lab procedures, not your approach to testing.

5) "Systematic testing includes test planning, test case design, test implementation, and test execution."

I might agree with that. That could be what "systematic testing" includes. There might be more or less, but the idea is, it's the steps we take when testing. How is that not covered in exploratory testing? When I'm doing my exploratory testing, I'm planning my work (chartering), I'm designing my tests (modeling, conjecturing, chartering), I'm implementing and executing my tests (manipulation, observation, recording, etc.). Where am I not doing those systematic things in ET?

6) "Just as if we where writing white paper’s it really just boils down to power words and semantics"

Based on your second post, I think we may not be so far apart. I just wonder if you've actually done any exploratory testing. You don't talk about it like you have and perhaps that's why you don't think it can be systematic. If you have, and your experience is that it's not ever systematic for you, then let's talk about that. I think that would be an interesting place to start.

- Mike Kelly

Roles in software testing

Hi everybody, after getting a friendly reminder from Pascal.Tesson I added here the roles in software testing, since the term software testers leads here. The phases and goals of testing can be seen in:

Gelperin, D., and Hetzel, B. (1988): "The Growth of Software Testing," CACM, Vol. 31, No. 6

The roles of software testers are taken from:

Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal (2005). Certified Tester - Foundation Level Syllabus - Version 2005, International Software Testing Qualifications Board (ISTQB), Möhrendorf, Germany. (PDF; 0,424 MB).

I am thinking of also adding the phases of software testing here - let's see where I can add them best. Searching... --Erkan Yilmaz 17:37, 12 October 2006 (UTC)

History of software testing

So, added one viewpoint of the history. This is from:

  • Gelperin, D., and Hetzel, B. (1988): "The Growth of Software Testing," CACM, Vol. 31, No. 6
  • a summary of the first can also be found in: Laycock, G. T. (1993): "The Theory and Practice of Specification Based Software Testing," University of Sheffield Department of Computer Science

What do you think, should we add one of these into the references? --Erkan Yilmaz 17:50, 12 October 2006 (UTC)

We should probably add both, no? DRogers 20:25, 12 October 2006 (UTC)
Works for me, DRogers, will add both. Added - the 2nd is also published free by Laycock :-) have fun reading
--Erkan Yilmaz 21:12, 12 October 2006 (UTC)

training

are there any colleges or universities that train people for software testing? —The preceding unsigned comment was added by 194.70.181.1 (talk) 17:43, 19 December 2006 (UTC).

Hello person with IP 194.70.181.1,
since you do not specify what training you want/need, there are so many ways to participate in testing and especially in software testing.
Why don't you join an Open source community?
There you have the chance to participate in SDLC, you have developers, testers, ... you learn the tools,...etc.etc.
That would be one start where you really learn and do not have just theory.
But my friend, why don't you use Google as an info source?
To finally answer your question: have a look here at Scott Barber's comment. --Erkan Yilmaz 18:23, 19 December 2006 (UTC)

The references to "functional testing" are broken. Tim Lisko 209.60.62.194 20:04, 15 January 2007 (UTC)

Does anyone know what the industry standard is for bugs being raised that are 'not a fault'? i.e. what percentage are NAF on average? Does about 20% sound right?

History

Hi,

Where is the reference with regard to the history paragraph, specifically:

Drs. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing as follows: until 1956 it was the debugging oriented period, where testing was often associated to debugging: there was no clear difference between testing and debugging. From 1957-1978 there was the demonstration oriented period where debugging and testing was distinguished now - in this period it was shown, that software satisfies the requirements. The time between 1979-1982 is announced as the destruction oriented period, where the goal was to find errors. 1983-1987 is classified as the evaluation oriented period: intention here is that during the software lifecycle a product evaluation is provided and measuring quality. From 1988 on it was seen as prevention oriented period where tests were to demonstrate that software satisfies its specification, to detect faults and to prevent faults.

I'm doing research on software testing and I would like to back this up outside Wikipedia, but I'm not finding others quoting the same thing (apart from one site, which copied and pasted the above into their page) or even anything similar.

Hoping you can help.

Sully 01:04, 22 March 2007 (UTC)

Call for cleanup of History section

The dates are in a seemingly random order (I think as a result of vandalism but I haven't checked ==History=='s history to verify this). I agree with Sully that the statements are unsourced. The section generally doesn't read like part of an encyclopedia entry. Lumpish Scholar 13:15, 18 May 2007 (UTC)

Priya4212's edits

Here's why I removed your edits:

  • The paragraph on grey box testing I think can be boiled down to one or two points, which I've put into the preceding paragraph.
  • Unit testing is neither basic, nor preliminary. I wouldn't call it "basic frame" since it tests in detail the guts of the system. And it isn't "preliminary" since the unit test suite should be maintained throughout the life of the project.
  • Integration testing doesn't in itself track down data dependency issues. Troubleshooting does, but that's a different beast. Integration simply tests the integration of the units.
  • With functional testing, "User demands" and "domain related demands" seem to be non-standard terminology. Furthermore, I'm assuming that they're other words for "requirements". And I think that functional testing is usually a bit more detailed than that.
  • Stating that Acceptance testing usually covers the User Acceptance Tests seems a bit redundant to me.
  • The paragraphs describing the stages of the bug I thought were either incorrect or too specific. For example, bugs can be classified in innumerable ways, not just the four you mention. And the names of statuses vary from company to company, or even project to project. I think including this detail could mislead the uninformed to believe that what you state is generally accepted fact. But I think that what you state is specific to your experience.

Does anyone disagree with my changes?DRogers 14:00, 22 March 2007 (UTC)

Not everywhere. But these are my comments on the topic:
  • Functional testing usually covers the functionality of the system. What is meant by functionality is adequately covered by ISO 9126, and except for security, it pretty much boils down to ensuring whether or not you're building the right system. Also note that functional testing can occur during the unit tests, the integration tests and the system test. So although it might be more detailed, one essentially validates the requirements. What that means in detail depends on the test stage.
  • Integration testing is a test stage where you make sure that different components work together, which naturally covers interfaces and data flows. Again, there is functional and non-functional integration testing, both of which should be carried out. The integration strategy is crucial since you have to program stubs or drivers for not-yet-finished components, which again touches data dependency (see the stub sketch after this comment). So here, I disagree.
  • The goal of UATs is to show the user that this is what could be made of his/her order. Normally it covers functional tests which are accepted as a valid means of showing a system's suitability. So I think I'll reread the section and just edit that one in.
That's all from my side for now. Tprosser 14:25, 15 May 2007 (UTC)
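
A minimal sketch of the stub idea from the integration-testing point above, with hypothetical names: the real exchange-rate service is not finished yet, so a stub stands in for it while the component that depends on it is integrated and tested.

    class ExchangeRateServiceStub:
        """Stands in for a component that is not finished yet."""
        def __init__(self, fixed_rates):
            self.fixed_rates = fixed_rates

        def rate(self, currency):
            return self.fixed_rates[currency]   # canned answer instead of real logic

    class InvoiceConverter:
        """Component under integration test; depends on the rate service."""
        def __init__(self, rate_service):
            self.rate_service = rate_service

        def to_euro(self, amount, currency):
            return round(amount * self.rate_service.rate(currency), 2)

    # Drive the integration with the stub standing in for the unfinished component.
    converter = InvoiceConverter(ExchangeRateServiceStub({"USD": 0.9, "GBP": 1.15}))
    assert converter.to_euro(100, "USD") == 90.0
    assert converter.to_euro(10, "GBP") == 11.5
    print("integration check with stub passed")

A driver works the other way around: a small piece of throwaway code that calls a finished component whose real callers do not exist yet.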

Levels of testing/Test Levels

The 'Levels of testing' subsection and the 'Test levels' section need to be merged. I'll do this at some point in the next few days if nobody else has.--Michig 15:30, 22 June 2007 (UTC)

Done.--Michig 09:32, 26 June 2007 (UTC)

Functional vs. non-functional?

There is inconsistency on functional vs. non-functional testing and how they divide system testing. Currently, Functional Testing redirects to System testing which names types of tests that are also referenced in Non-functional tests. I am removing the functional/non-functional bullet points for now because it's confusing. Mikethegreen 14:08, 6 August 2007 (UTC)

I'm not sure why there's so much confusion about this. Functional testing tests functionality, i.e. the product is supposed to let me do this, but does it? Non-functional testing is everything else, i.e. performance, security, etc. And both of these typically fall under the black box umbrella. System testing tests the whole system. Some of these tests are functional, some non-functional. Maybe the confusion is because these are all different ways to view the same bunch of things, and not different things all together. DRogers 12:26, 8 August 2007 (UTC)

Interwikis

In the German version we found an interwiki link from acceptance testing and software testing to the German translations (adopted from the English words) Akzeptanztest resp. Softwaretest, which are now handled in a complex article about system testing within a definition section. So to come back to the English article you may be forced to step back to the redirect! For English people that should not be necessary, but what about the other interwikis supposedly installed because of translations from the English version? --SonniWP 09:58, 22 August 2007 (UTC)

Component Testing

Is component testing analogous to unit testing, as "each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented"? If so, I would like to suggest that Component testing, which is orphaned, is merged with Unit Testing. —Preceding unsigned comment added by 87.80.105.132 (talk) 21:55, 3 September 2007 (UTC)


"See also" Part

Long reference lists are generally useless if they are not sorted, because readers will certainly not click through all references and cannot find a reference they would like to continue reading without additional information on the subject. I suggest sorting the list using categories such as "Software Development Paradigms" (which emphasize testing), "Testing Methods" (unit tests, smoke tests), "Component Testing" (GUI testing, functional testing) or "Formalized Testing Approaches" (formal verification, IEEE). —Preceding unsigned comment added by 85.178.140.15 (talk) 16:57, 16 September 2007 (UTC)

Removed Software Testing Axioms

I've taken the following to the talk page. If these are axioms (and I'm dubious about the last two) then they will be written down somewhere.

Software Testing Axioms
  1. It is impossible to test a program completely.
  2. Software testing is a risk-based exercise.
  3. Testing cannot show that bugs don't exist.
  4. The more bugs you find, the more bugs there are.
  5. Not all the bugs you find will be fixed.
  6. Product specifications are never final.

JASpencer (talk) 20:24, 5 January 2008 (UTC)

Removing NMQA

I've removed this link to the talk page:

*[http://www.nmqa.com Software Testing Specialists - NMQA]

If there is anything of interest within the website feel free to reinstate it linking to that particular page. JASpencer (talk) 21:34, 21 January 2008 (UTC)