Wednesday, October 6, 2010

Types of Testing

Types of Testing:

Unit Testing: White box testing conducted by the developer. It is the most micro scale of testing, used to test a particular function or code module.

Unit: Smallest testable piece of software.
Unit testing is done to show that the unit does not satisfy the functional specification and/or that its implemented structure does not match the intended design structure.
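As a sketch, a unit test in Python's unittest framework might look like this (the calculate_discount function is hypothetical, invented for illustration):

```python
import unittest

def calculate_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price - (price * percent / 100)

class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(calculate_discount(200, 10), 180)

    def test_zero_discount(self):
        self.assertEqual(calculate_discount(200, 0), 200)

    def test_invalid_percent_rejected(self):
        # The unit should refuse percentages outside 0-100.
        with self.assertRaises(ValueError):
            calculate_discount(200, 150)
```

Such tests can be run with `python -m unittest`; each test exercises the smallest testable piece in isolation.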

Smoke Testing: Used to test or validate the basic functionality of the system.

Or

Smoke Testing: It is an initial type of testing. Once the testing team gets a build, it needs to do BVT (Build Verification Testing), or smoke testing, to verify the major functional components of the build. This has to be done based on the requirements.

Functional Testing: Used to verify that each and every module is functionally stable; the entire module is tested by functional testing.

Regression Testing: Testing to ensure that code changes have not had an adverse effect on other modules or existing functionality.

Re-Testing: Testing in which one performs testing on the same function again and again with multiple sets of data in order to come to a conclusion on whether the functionality is working fine or not.

Integration Testing: The phase of software testing in which individual modules are combined and tested as a group. It follows unit testing and precedes system testing.
Different types: big bang, top-down, bottom-up.
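The top-down variant can be sketched as follows (module and stub names are hypothetical): the high-level module is tested first, with the lower-level component simulated by a stub.

```python
class PricingServiceStub:
    """Stub standing in for a lower-level component not yet integrated."""
    def get_price(self, item_id):
        return {"A1": 100, "B2": 250}.get(item_id, 0)

class OrderModule:
    """Higher-level module under test; depends on a pricing service."""
    def __init__(self, pricing_service):
        self.pricing = pricing_service

    def order_total(self, item_ids):
        return sum(self.pricing.get_price(i) for i in item_ids)

# Integration test of OrderModule against the stub:
module = OrderModule(PricingServiceStub())
assert module.order_total(["A1", "B2"]) == 350
```

Once the real pricing service is integrated, the stub is replaced and the same test is repeated against the real component.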

System Testing: The system is one big component.
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Concern: issues and behavior that can only be exposed by testing the entire integrated system (e.g. performance, security, recovery).

Stress Testing: Testing system functionality while the system is under unusually heavy or peak load. It is carried out in a high-stress environment. This requires that you make some prediction about the expected load levels of your website.

Load Testing: Testing an application under heavy load, such as testing a website under a range of loads to determine at what point the system's response time degrades or fails.

Performance Testing: Conducted to identify the operating conditions under which the system exhibits the best response time.

Volume Testing: Done to find weaknesses in the system with respect to its handling of large amounts of data during short time periods.

Usability Testing: Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors, and offer a high degree of satisfaction to the user.
Usability means bringing the usage perspective into focus, the side towards the user.

Security Testing: Testing how well the system protects against unauthorized internal or external access, willful damage, etc. This type of testing may require sophisticated testing techniques.

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing.
Typically done by end users or others, not by programmers or testers.

Beta Testing: Testing when development and testing are essentially completed, and final bugs and problems need to be found before the final release. Typically done by end users or others, not by programmers.

UAT: The software is handed over to the user in order to find out whether the software meets the user's expectations and works as expected. This is the final testing conducted before the customer accepts the product.

Type of Architecture

Client server application/Web server application

Points: It is a network architecture which separates a client from a server.

• The client software can send requests to a server.
• Servers include web servers, application servers, file servers, terminal servers, and mail servers.
Ex: if you are browsing an online store, your computer and web browser would be considered the client, and the computers, data, and applications that make up the online store would be considered the server. When your web browser requests a particular page from the online store, the server finds all of the information required to display the page in the database, assembles it into a web page, and sends it back to your web browser for you to look at.

Characteristics of a Server:
• Passive (slave)
• Waits for requests
• Upon receipt of a request, processes it and then serves a reply

Characteristics of a Client:
• Active (master)
• Sends request
• Waits for and receives server replies

Servers are of two types:
• Stateful and stateless

A stateless server does not keep any information between requests.
A stateful server can remember information between requests.
Ex: an HTTP server for static HTML pages is a stateless server,
while Apache Tomcat (with its session handling) is an example of a stateful server.
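The difference can be illustrated with a small sketch (class names are hypothetical): the stateless handler derives its reply from the request alone, while the stateful one remembers a session between requests.

```python
class StatelessServer:
    def handle(self, request):
        # No memory between requests: the same input always gives the same output.
        return "page:" + request

class StatefulServer:
    def __init__(self):
        self.sessions = {}  # state remembered between requests

    def handle(self, user, request):
        count = self.sessions.get(user, 0) + 1
        self.sessions[user] = count
        return "page:%s (visit %d for %s)" % (request, count, user)
```

With the stateless server, repeating a request always produces the same reply; the stateful server's reply depends on what it remembers about the user.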

The interactions between client and server are often described using sequence diagrams, which are standardized in the UML (Unified Modeling Language).

Peer to Peer Architecture: Another type of network architecture, in which each node or instance of the program is both a client and a server, and each has equivalent responsibilities.

Tiered Architecture: Generic client/server architecture has two types of nodes on the network: clients and servers. This is called "two-tier architecture".

Some networks consist of three different kinds of nodes:

Clients, application servers which process data for the clients, and database servers which store data for the application servers. This is called three-tier architecture.

Advantages of n-tier architecture compared with two-tier architecture:
1. It separates out the processing so as to better balance the load on the different servers.
2. It is more scalable.

Disadvantages:
1. It puts more load on the network.
2. It is much more difficult to program and test the software than in two-tier architecture, because more devices have to communicate to complete a user's transaction.

Thin client vs thick client:
A thin client (sometimes also called a lean client) is a client computer or client software in a client-server architecture network which depends primarily on the central server for processing activities and mainly focuses on conveying input and output between the user and the remote server.

A thick client (or fat client) does as much processing as possible itself and passes only the data needed for communication and storage to the server.

Environment:

Environment is a combination of 3 layers:
• Presentation Layer
• Business layer
• Database layer

Types of Environment:
There are 4 types of environments:
1. Stand-alone environment / one-tier architecture
2. Client-server environment / two-tier architecture
3. Web environment / three-tier architecture
4. Distributed environment / n-tier architecture

1. Stand-alone environment (or) one-tier architecture:
This environment contains all three layers, i.e. presentation layer, business layer, and database layer, in a single tier.

2. Client-server environment or two-tier architecture:
In this environment there are two tiers: one tier is for the clients and the other tier is for the database server. The presentation layer and business layer are present in each and every client, and the database is present in the database server.

3. Web environment: In this environment there are three tiers: the clients reside in one tier, the application server resides in the middle tier, and the database server resides in the last tier. Every client has the presentation layer, the application server has the business layer, and the database server has the database layer.

4. Distributed environment: It is the same as the web environment, but the business logic is distributed among application servers in order to distribute the load.

Web server: It is software that provides web services to the client.

Application server: It is a server that holds the business logic.
Ex: Tomcat, WebLogic, WebSphere, etc.

Comparison with the MVC architecture:
MVC – Model view controller

A fundamental rule in three-tier architecture is that the client tier never communicates directly with the data tier; all communication must pass through the middleware tier.
The three-tier architecture is linear. MVC is a triangle: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.
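A minimal sketch of the MVC triangle (all class names hypothetical): the view sends updates to the controller, the controller updates the model, and the view reads the model directly.

```python
class Model:
    def __init__(self):
        self.count = 0

class Controller:
    def __init__(self, model):
        self.model = model

    def increment(self):
        self.model.count += 1  # controller updates the model

class View:
    def __init__(self, model, controller):
        self.model = model
        self.controller = controller

    def click(self):
        self.controller.increment()  # view sends the update to the controller

    def render(self):
        return "count = %d" % self.model.count  # view reads the model directly
```

Note the view never writes to the model itself; writes flow through the controller, while reads go straight from model to view.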

Web development usage:
Three tiers are often used to describe web sites:
1. A front-end web server serving static content.
2. A middle dynamic-content processing and generation level: application servers, for example on the Java EE platform.
3. A back-end database, comprising both data sets and the database management system or RDBMS software that manages and provides access to the data.

Application and Web Servers:
A web server serves pages for viewing in a web browser, while an application server provides methods that client applications can call.

A web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.

Testing Technique

Types of Testing Techniques:

1. Equivalence Partitioning
2. Boundary Value Analysis
3. Error Guessing.

Equivalence Partitioning: Dividing the input domain into classes of data from which test cases can be generated.

Equivalence Class: A portion of a component's input or output domain for which the component's behavior is assumed to be the same, based on the component specification.
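For example (a hypothetical age field accepting 18 to 60), the input domain falls into three equivalence classes, and one representative value per class is enough:

```python
def is_valid_age(age):
    # Hypothetical validation rule: ages 18 through 60 are accepted.
    return 18 <= age <= 60

# One representative value from each equivalence class:
representatives = {
    "below range (invalid)": 10,
    "within range (valid)": 35,
    "above range (invalid)": 75,
}

assert is_valid_age(representatives["within range (valid)"])
assert not is_valid_age(representatives["below range (invalid)"])
assert not is_valid_age(representatives["above range (invalid)"])
```

Testing one value per class is assumed to be as effective as testing every value in the class, since the behavior inside each class is the same by definition.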

Boundary Value Testing:
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested.

Boundary Value Analysis: BVA is similar to equivalence partitioning but focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
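A sketch of the -100 to +1000 range mentioned above: BVA picks the boundaries themselves and the values just outside them.

```python
def in_range(value):
    # The specified valid range: -100 to +1000 inclusive.
    return -100 <= value <= 1000

boundary_cases = {
    -101: False,  # just below the lower boundary
    -100: True,   # the lower boundary itself
    1000: True,   # the upper boundary itself
    1001: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected
```

Off-by-one mistakes (writing < where <= was intended) are exactly what these edge values catch.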

Error Guessing:
In software testing, error guessing is a test method in which the test cases used to find bugs in programs are established based on experience from prior testing. The scope of the test cases usually relies on the software tester involved, who uses past experience and intuition to determine what situations commonly cause software failure, or may cause errors to appear. Typical errors include division by zero, null pointers, or invalid parameters.


Error guessing has no explicit rules for testing; test cases can be designed depending on the situation, either drawing from functional documents or from unexpected/undocumented errors found while testing operations.

Test Metrics

Test Metrics:
What test metrics accomplish is analyzing the current level of maturity in testing and giving a projection of how to go about testing activities, by allowing us to set goals and predict future trends.
The table shows some examples; there are many test metrics.


STLC and Testing Terminology

Testing Life Cycle:

Determines the set of activities performed by the tester for testing the software during the product life cycle.

The software testing life cycle contains the following activities:

1. Test Plan Preparation

2. Test Specification preparation

3. Test Execution

4. Defect Reporting and Tracking



Test Driver: A program or test tool used to execute a test. Also known as a Test Harness.

Test Environment: The hardware and software environment in which test will be run and any other software with which the software under test interacts.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Scripts: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Suite: A collection of tests used to validate the behavior of a product.

Test Scenario: TS is a set of action or test cases executed in sequence which represents a business operation.

Top-down Testing: An approach to integration testing where the components at the top of the component hierarchy are tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.

Traceability Matrix: A document showing the relationship between requirements and test cases.
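In its simplest form, such a matrix is just a mapping from requirements to test cases (the IDs below are hypothetical), which also makes coverage gaps easy to spot:

```python
# Requirement -> test cases that verify it:
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # gap: a requirement with no test coverage
}

# Any requirement with an empty list is not covered by any test case:
uncovered = [req for req, tests in traceability.items() if not tests]
assert uncovered == ["REQ-003"]
```

In practice the same table is usually kept in a spreadsheet, with requirements as rows and test cases as columns.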

Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The test plan for a project should enumerate the test bed(s) to be used.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Functional Decomposition: A technique used during planning, analysis, and design; create a function hierarchy for the software.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

System/workflow Testing: Testing the entire project by simulating the real business operation performed by the end user.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Complete: The phase of development when functionality is implemented in its entirety and bug fixes are all that are left. All functions found in the functional specification have been implemented.

Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.

Test Case: A set of inputs and expected results that exercises a particular action of the application.

Cyclomatic Complexity:  A measure of the logical complexity of an algorithm used in white-box testing.
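One common formula is V(G) = E - N + 2P, where E is the number of edges in the control-flow graph, N the number of nodes, and P the number of connected components; equivalently, the number of decision points plus one. A sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P
    return edges - nodes + 2 * components

# A flow graph for a single if/else: 4 nodes (decision, true branch,
# false branch, join) and 4 edges, in one connected component.
assert cyclomatic_complexity(edges=4, nodes=4) == 2  # one decision point + 1

# Straight-line code with no decisions: 2 nodes joined by 1 edge.
assert cyclomatic_complexity(edges=1, nodes=2) == 1
```

In white-box testing, V(G) gives the number of independent paths through the code, and hence a lower bound on the number of test cases needed for basis path coverage.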

Data Flow diagram:  A modeling notation that represents a functional decomposition of a system.

Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.

Release Candidate: A pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

Walkthrough: A review of requirements, design, or code characterized by the author of the material under review guiding the progression of the review.

Code Inspection: A formal testing (formal meeting) technique, where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique, where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the program logic and assumptions.

Quality Assurance: An activity which establishes and evaluates the processes that produce the product.

Quality Control: is an activity which verifies whether or not the product meets standards.

Product: Includes the software, associated data, documentation, and all supporting and reporting paperwork.

Process: Includes all the activities involved in designing, developing, enhancing, and maintaining software.

Test Strategy: It describes the general objectives of test activities. The test strategy is the heart of the test plan. It controls: what testing are we conducting? How frequently are we conducting the testing? Are we planning to automate the testing? On what operating systems are we testing?

Test Plan: It is a strategic document which describes how to perform testing on an application in an effective, efficient, and optimized way. It contains sections like test strategy, test resources, test environment, test deliverables, and risk analysis.

Test Process: The test process starts from the requirements stage itself, which is called static testing. The whole process from the identification of a bug to its fixing is the test process.

Use Case: Documentation derived from the SRS which helps in writing the code and test cases. It contains the step-by-step approach of the user on the application. Test cases can be created for the normal course and the exceptional courses described in the use case.

Extreme Programming: A software development approach for small teams on risk-prone projects with unstable requirements. Programmers are expected to write unit and functional test code first, before the application is developed. Test code is under source control along with the rest of the code.

High Level Design: The HLD gives the overall system design in terms of functional architecture and database design. This is very useful for the developers to understand the flow of the system. The entry criterion for this is the requirements document, and the exit criteria are the HLD document, project standards, the functional design document, and the database design document.

Low Level Design: During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program a unit test plan is created.
The entry criterion for this is the HLD document, and the exit criteria are the program specifications and unit test plans (LLD).


Some more Testing Types

Adhoc Testing: Testing done without any formal test plan or test case creation. It helps the tester learn the application prior to starting any other testing.

GUI: GUI testing is done to check whether all the GUI standards are followed or not.
The Microsoft GUI standards are the best-known standards for GUI.
Content:
Section I: Windows compliance standards:
1. Application
2. For each window in the application
3. Text boxes
4. Options (Radio buttons)
5. Check boxes
6. Command button
7. Drop down list boxes
8. Combo boxes
9. List boxes

Section 2: Tester’s screen validation checklist.
1. Aesthetic condition
2. Validation condition
3. Navigation condition
4. Usability condition
5. Data integrity condition
6. Modes (editable/read-only) conditions
7. General condition
8. Specific field tests
9. Data field test
10. Numeric field
11. Alpha field check

Exploratory Testing: This testing is similar to ad-hoc testing and is done in order to explore the application and find bugs by testing the product beyond the normal testing.

Soak Testing: Conducted to identify the system's behavior over long execution times; memory leaks and buffer overflows are usually identified through this testing.

Installation Testing: Used to test whether installation, un-installation, re-installation, and upgrade of the product happen properly.

Interoperability Testing: Used to test how this release interoperates with other software products and with earlier releases of the same product; this testing can also be called backward compatibility testing.

Parallel/Audit Testing: Testing in which the user reconciles the output of the new system with the output of the current system to verify that the new system performs the operations correctly.

Incremental Integration Testing: Continuous testing of an application as new functionality is added. This may require that various aspects of the application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers or by testers.


End to End Testing: Similar to system testing; the macro end of the test scale. Involves testing of a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Compatibility Testing: Testing how well software performs in a particular hardware/software/operating system/network environment.

Comparison Testing: Testing that compares the software's weaknesses and strengths to those of competing products.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Negative Testing: Testing aimed at showing the software does not work. Also known as 'Test to Fail'.

Positive Testing: Testing aimed at showing software works. Also known as ‘Test to Pass’.

Static Testing: Analysis of a program carried out without executing the program.

Dynamic Testing: Analysis of a program carried out by executing the program.


Security Testing: A type of testing in which one usually concentrates on the following areas:
I). Authentication Testing
II). Direct URL Testing
III). Firewall Leakage Testing

I). Authentication Testing: A type of testing in which a test engineer enters different combinations of user names and passwords in order to check whether only authorized persons are able to access the application.
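A sketch of such combination-driven checks against a hypothetical authenticate function (the accounts and credentials are invented for illustration):

```python
VALID_ACCOUNTS = {"alice": "s3cret", "bob": "hunter2"}

def authenticate(username, password):
    # Hypothetical login check for the application under test.
    return VALID_ACCOUNTS.get(username) == password

# Credential combinations a test engineer would try, with expected outcomes:
combinations = [
    ("alice", "s3cret", True),   # valid user, valid password
    ("alice", "wrong", False),   # valid user, invalid password
    ("eve", "s3cret", False),    # unknown user
    ("", "", False),             # empty credentials
]

for user, pwd, expected in combinations:
    assert authenticate(user, pwd) == expected
```

Only the valid user/password pair should be accepted; every other combination must be rejected.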

II). Direct URL Testing: A type of testing in which a test engineer specifies the direct URLs of secured pages and checks whether they can be accessed without authentication.

III). Firewall Leakage Testing: A type of testing in which one checks whether a user of one permission level can access pages or functionality restricted to another level, i.e. whether access leaks past the configured restrictions.



Difference between Alpha Testing and Beta Testing:


Difference between Smoke Testing and Sanity Testing:

Smoke Testing: It is an initial type of testing. Once the testing team gets a build, it needs to do BVT (Build Verification Testing), or smoke testing, to verify the major functional components of the build. This has to be done based on the requirements.

Sanity Testing: It is also an initial type of testing. Once the testing team gets a build, it needs to do sanity testing to check for hardware/network discrepancies. This ensures that the build is functioning correctly according to the business requirements, without hardware/network issues.

A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.

Ex: if, while the user is working on the build, it restarts every 5 minutes, the testing team would not proceed with further testing. This kind of issue is found in sanity testing.




Software Development Life Cycle

Software Development Life Cycle:
The software development life cycle begins with the identification of requirements for the software and ends with the formal verification of the developed software against those requirements.



In the V-shaped model, the development and testing phases are carried out in parallel sequence, i.e. verification on one side and validation on the other side.

Advantage: As verification and validation are done along with test management, the outcome of the V-shaped model is a quality product.

Drawback: Time consuming and costly model.


Waterfall model: In this model the development and testing phases are carried out in a step-by-step process, and a baseline document is prepared.


Advantage: It's a simple model, easy to maintain; project implementation is very easy.

Drawback: Can't incorporate new changes in the middle of the project's development once the baseline document is prepared.

Prototype Model (Rapid Prototype model):-
This is a cyclic version of the linear model. In this model, once the requirements analysis is done and the design for a prototype is made, the development process gets started. Once the prototype is created, it is given to the customer for evaluation. The customer tests the package and gives his/her feedback to the developer, who refines the product according to the customer's exact expectations. After a finite number of iterations, the final software package is given to the customer. In this methodology the software evolves as a result of periodic shuttling of information between the customer and developer.

Prototyping is the process of quickly putting together a working model (a prototype) in order to test various aspects of a design, illustrate ideas or features, and gather early user feedback.




Advantage: When the customer is not clear about the requirements, this is the best model for gathering clear requirements.

Drawbacks:
·         It is not a complete model.
·         Time-consuming model.
·         The prototype has to be built at the company's cost.
·         The user may stick to the prototype and limit his requirements.

Spiral Model: The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering, and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.


Advantage:  
·         High amount of risk analysis.
·         Good for large and mission critical projects.
·         Software is produced early in the software life cycle

Disadvantage:
·         Can be costly model to use.
·         Risk analysis requires highly specific expertise.
·         Project’s success is highly dependent on the risk analysis phase.
·         Doesn’t work well for smaller projects.



Manual Testing

Manual Testing: It is a process in which all the phases of the software testing life cycle, like test planning, test development, test execution, result analysis, bug tracking, and reporting, are accomplished manually with human effort.


Testing: An examination of the behavior of a program by executing it on sample data sets.
OR
Testing is a process of executing a program with the intent of finding bugs, and of improving quality.

Software Testing: The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.

Purpose of Testing:
1. To uncover hidden errors
2. To achieve the maximum usability of the system
3. To Demonstrate expected performance of the system.

General reasons for defects found in software systems:
• The software improperly interprets requirements
• The users specify the wrong requirements
• The requirements are incorrectly recorded
• Errors in program coding
• The design specifications are incorrect
• Data entry errors

Types of Testing:

1. White Box Testing:
     o Testing is based on the source code.
     o Testing the functionality knowing the internal structure of the program.

Structure of the program:
     o Coverage types: statement coverage, decision coverage, condition coverage.

White box testing can derive test cases to ensure that:
1. All independent paths are exercised at least once.
2. All logical decisions are exercised for both their true and false paths.
3. All loops are executed at their boundaries and within operational bounds.
4. All internal data structures are exercised to ensure validity.
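A sketch of point 2, decision coverage (the classify function is hypothetical): inputs are chosen so that every decision is executed for both its true and false outcome.

```python
def classify(n):
    if n < 0:      # decision 1
        return "negative"
    if n == 0:     # decision 2
        return "zero"
    return "positive"

# Three inputs cover both outcomes of both decisions:
assert classify(-5) == "negative"   # decision 1 true
assert classify(0) == "zero"        # decision 1 false, decision 2 true
assert classify(7) == "positive"    # decision 1 false, decision 2 false
```

Note that statement coverage alone could be satisfied without ever taking decision 2's false branch; decision coverage forces all three inputs.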

Why white box testing when black box testing is used?
  • Logical errors and incorrect assumptions are most likely to be made when coding for special cases; we need to ensure these execution paths are tested.
  • We may make incorrect assumptions about execution paths and so make design errors; white box testing can find these errors.
  • Typographical errors are random, just as likely to be on an obscure logical path as on a mainstream path.
Test Engineer Responsibilities:
• Analysis on Requirement document (SRS).
• Developing test cases.
• Test data planning and capturing
• Test execution
• Developing automated scripts for regression testing
• Preparing test documentation.
• Defect tracking and reporting.
• Participating in reviews.

To Test Effectively a Test Engineer Must:
• Thoroughly understand the system.
• Thoroughly understand the technology the system is being developed on.
• Possess creativity, insight, and business knowledge.
• Have a "breaking" attitude.
• Have good error-guessing skills.

Verification: Ensures the system complies with organizational standards and processes, using non-executable methods.
Ex: requirements review, design review, code walkthrough, code inspection.

Validation: Ensures the system satisfies the specified requirements by executing a series of tests.
Ex: unit testing, integration testing, system testing, UAT, etc.

Difference b/w White box and Black box Testing:



Advantages (of black box testing):

• More effective on larger units of code than glass box testing.
• The tester needs no knowledge of the implementation, including specific programming languages.
• Testers and programmers are independent of each other.
• Tests are done from the user's point of view.
• Test cases can be designed as soon as the specifications are complete.

Disadvantages (of black box testing):
• Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
• Without clear and concise specifications, test cases are hard to design.
• There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
• May leave many program paths untested.
• Cannot be directed toward specific segments of code, which may be very complex and therefore error-prone.
• Most testing-related research has been directed toward glass box testing.


Automation Testing: It is a process in which all the drawbacks of manual testing are addressed (overcome) properly, adding speed and accuracy to the existing testing phases.

Bug Life Cycle

BUG
What is a BUG?
A bug is a deviation from the requirement.
Or       
Bug can be defined as the abnormal behavior of the software.

Classification of Bug:
  1. Functionality bug
  2. Security bug
  3. Interface bug
  4. Cosmetic bug
Guidelines on deciding the severity of bugs:
  1. Critical/show stopper: An item that prevents further testing of the product or function under test can be classified as a critical bug. No workaround is possible for such bugs.
Ex: a missing menu option, or a security permission required to access a function under test.

  2. Major/high: A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a major bug. A workaround can be provided for such bugs.
Ex: inaccurate calculations, the wrong field being updated.

  3. Average/medium: Defects which don't conform to standards and conventions can be classified as medium bugs. An easy workaround exists to achieve the functionality objectives.
Ex: mismatched visual and text links which lead to different end points.

  4. Minor/low: Cosmetic defects which don't affect the functionality of the system can be classified as minor bugs.
Difference b/w Bug, Defect, Error, and Fault/Failure:

Defect: If the test engineer identifies a problem with respect to the functionality, it is called a defect.

Bug: If the developer accepts the defect, it is called a bug.

Error: It is a problem related to the program code.

Fault/Failure: If the customer identifies the problem after delivery, it is called a fault/failure.

Root Cause Analysis and Traceability Matrix

 Root Cause Analysis:

Root cause analysis comprises the techniques or tools that are used to determine the reason for a problem occurring.

Schedule Variance: The schedule is the plan for executing a project, and variance is the slippage against that plan.

Root cause analysis is performed through the actions listed below:

-          Understand the problem.
-          Gather the required information on the cause of the problem.
-          Identify all the major and minor issues that create the problem.
-          Find the root causes based on the evidence or issues.
-          Make recommendations targeting all the issues.
-          Implement all the recommendations.

The input to root cause analysis is a problem, and the output of root cause analysis is the solution that closes all the problems.

Root cause analysis tool types:

Based on the problem encountered, we choose the tool:

5 Whys: Mostly used with problems involving human factors or interactions.
Barrier Analysis: Treats the problem as a barrier and tries to remove the barrier.

Change Analysis: Mainly works on the risk areas and tries to explore them.

Failure Mode and Effect Analysis: Commonly used to evaluate risk management priorities for mitigating known threat vulnerabilities.

Fish Bone Diagram (Ishikawa diagram, or cause & effect diagram):
The fish bone diagram helps to visually display the many potential causes of a specific problem or effect.

Pareto Analysis: A statistical technique in decision making that is used for the selection of a limited number of tasks that produce a significant overall effect.

Fault Tree Analysis: Visual models showing the logical relationships between equipment failures, human errors, and external events that cause the problem.

Fish Bone Analysis diagram


Fish bone analysis provides a systematic way of looking at effects and the causes that create or contribute to those effects.

It is depicted like a fish skeleton, which is why it is known as a fish bone diagram.

A fish bone diagram is used for some of the reasons stated below:

-          To study a problem/issue and determine its root cause.
-          To study all the possible reasons why a process is having difficulties or problems.
-          To identify areas for data collection.
-          To study why a process is not performing properly or producing the desired results.


What is a Traceability Matrix?
Requirement tracing is the process of documenting the links between the user requirements for the system you are building and the work products developed to implement and verify those requirements. These work products include software requirements, design specifications, code, tests, and other artifacts of the software development process.


Disadvantages of not using a Traceability Matrix:
Ø  The system that is built may not have the necessary functionality to meet the customers' and users' needs and expectations.
Ø  If there are modifications in the design specifications, there is no means of tracking the changes.
Ø  If there is no mapping of test cases to requirements, it may result in missing a major defect in the system.
Ø  The completed system may have 'extra' functionality that was not specified in the design specification, resulting in wastage of manpower, time, and effort.
Ø  If the code components that constitute the customer's high-priority requirements are not known, then the areas that need to be worked on first may not be known, reducing the chances of shipping a useful product on schedule.
Ø  A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be correctly evaluated.







