Saturday, February 27

What is the difference between Rules and Constraints

Rules and Constraints are similar in functionality, but there is a small difference between them. Rules are kept mainly for backward compatibility. The most notable difference is that a rule can be bound to a data type, whereas constraints are bound only to columns. So we can create our own data type, bind a rule to it, and have input validated according to that rule.
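
As a hedged sketch (the rule, type, and table names below are made up for illustration), a rule can be bound to a user-defined data type with sp_bindrule, while a CHECK constraint can only be declared on a column of a specific table:

-- A rule bound to a user-defined data type (kept mainly for backward compatibility)
CREATE RULE dbo.rule_PositiveValue AS @value > 0
GO
EXEC sp_addtype 'PositiveInt', 'int', 'NOT NULL'
EXEC sp_bindrule 'dbo.rule_PositiveValue', 'PositiveInt'
GO
-- A constraint, by contrast, is bound to a column of a specific table
CREATE TABLE dbo.Orders (Qty int CONSTRAINT CK_Qty_Positive CHECK (Qty > 0))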

What is Log Shipping and its purpose

http://msdn.microsoft.com/en-us/library/ms187103.aspx

Log shipping allows you to automatically send transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each of the secondary databases individually. An optional third server instance, known as the monitor server, records the history and status of backup and restore operations and, optionally, raises alerts if these operations fail to occur as scheduled.

Log Shipping Operations
Log shipping consists of three operations:

Back up the transaction log at the primary server instance.
Copy the transaction log file to the secondary server instance.
Restore the log backup on the secondary server instance.
The log can be shipped to multiple secondary server instances. In such cases, operations 2 and 3 are duplicated for each secondary server instance.
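
These three operations correspond to ordinary BACKUP LOG and RESTORE LOG commands. The following is only a rough, hedged sketch (the database name, backup share, and file paths are placeholders; in practice the jobs are created by the log shipping wizard or the sp_add_log_shipping_* procedures):

-- 1. On the primary server: back up the transaction log to the backup share
BACKUP LOG SalesDB TO DISK = '\\PrimaryServer\LogShipShare\SalesDB_tlog.trn'
-- 2. The copy job then copies the .trn file to a local folder on the secondary server (a plain file copy, outside T-SQL)
-- 3. On the secondary server: restore the log backup, leaving the database ready to accept further log restores
RESTORE LOG SalesDB
FROM DISK = 'D:\LogShipCopy\SalesDB_tlog.trn'
WITH NORECOVERY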

A log shipping configuration does not automatically fail over from the primary server to the secondary server. If the primary database becomes unavailable, any of the secondary databases can be brought online manually.

You can use a secondary database for reporting purposes. For more information, see Using Secondary Servers for Query Processing.

In addition, you can configure alerts for your log shipping configuration.

Primary Server and Database
The primary server in a log shipping configuration is the instance of the SQL Server Database Engine that is your production server. The primary database is the database on the primary server that you want to back up to another server. All administration of the log shipping configuration through SQL Server Management Studio is performed from the primary database.

The primary database must use the full or bulk-logged recovery model; switching the database to simple recovery will cause log shipping to stop functioning.

Secondary Server and Databases
The secondary server in a log shipping configuration is the server where you want to keep a warm standby copy of your primary database. A secondary server can contain backup copies of databases from several different primary servers. For example, a department could have five servers, each running a mission-critical database system. Rather than having five separate secondary servers, a single secondary server could be used. The backups from the five primary systems could be loaded onto the single backup system, reducing the number of resources required and saving money. It is unlikely that more than one primary system would fail at the same time. Additionally, to cover the remote chance that more than one primary system becomes unavailable at the same time, the secondary server could be of higher specification than the primary servers.

The secondary database must be initialized by restoring a full backup of the primary database. The restore can be completed using either the NORECOVERY or STANDBY option. This can be done manually or through SQL Server Management Studio.
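
For example (a sketch with placeholder database and file names):

RESTORE DATABASE SalesDB
FROM DISK = 'D:\LogShipCopy\SalesDB_full.bak'
WITH NORECOVERY
-- or, for a readable warm standby, use WITH STANDBY = 'D:\LogShipCopy\SalesDB_undo.tuf'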

Monitor Server
The optional monitor server tracks all of the details of log shipping, including:

When the transaction log on the primary database was last backed up.
When the secondary servers last copied and restored the backup files.
Information about any backup failure alerts.
The monitor server should be on a server separate from the primary or secondary servers to avoid losing critical information and disrupting monitoring if the primary or secondary server is lost. A single monitor server can monitor multiple log shipping configurations. In such a case, all of the log shipping configurations that use that monitor server would share a single alert job.

Important:
Once the monitor server has been configured, it cannot be changed without removing log shipping first.
For more information, see Monitoring Log Shipping.

Log Shipping Jobs
Log shipping involves four jobs, which are handled by dedicated SQL Server Agent jobs. These jobs include the backup job, copy job, restore job, and alert job.

The user controls how frequently log backups are taken, how frequently they are copied to each secondary server, and how frequently they are applied to the secondary database. To reduce the work required to bring a secondary server online, for example after the production system fails, you can copy and restore each transaction log backup soon after it is created. Alternatively, perhaps on a second secondary server, you can delay applying transaction log backups to the secondary database. This delay provides an interval during which you can notice and respond to a failure on the primary, such as accidental deletion of critical data.

Backup Job
A backup job is created on the primary server instance for each primary database. It performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information. By default, this job will run every 15 minutes, but the interval is customizable.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Backup" is created on the primary server instance.

SQL Server 2008 Enterprise and later versions support backup compression. When creating a log shipping configuration, you can control the backup compression behavior of log backups. For more information, see Backup Compression (SQL Server).

Copy Job
A copy job is created on each secondary server instance in a log shipping configuration. This job copies the backup files from the primary server to a configurable destination on the secondary server and logs history on the secondary server and the monitor server. The copy job schedule, which is customizable, should approximate the backup schedule.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Copy" is created on the secondary server instance.

Restore Job
A restore job is created on the secondary server instance for each log shipping configuration. This job restores the copied backup files to the secondary databases. It logs history on the local server and the monitor server, and deletes old files and old history information. The SQL Server job category "Log Shipping Restore" is created on the secondary server instance when log shipping is enabled.

On a given secondary server instance, the restore job can be scheduled as frequently as the copy job, or the restore job can be delayed. Scheduling these jobs with the same frequency keeps the secondary database as closely aligned with the primary database as possible to create a warm standby database.

In contrast, delaying restore jobs, perhaps by several hours, can be useful in the event of a serious user error, such as a dropped table or inappropriately deleted table row. If the time of the error is known, you can move that secondary database forward to a time soon before the error. Then you can export the lost data and import it back into the primary database.

Alert Job
If a monitor server is used, an alert job is created on the monitor server instance. This alert job is shared by the primary and secondary databases of all log shipping configurations using this monitor server instance. Any change to the alert job (such as rescheduling, disabling, or enabling the job) affects all databases using that monitor server. This job raises alerts (for which you must specify alert numbers) for primary and secondary databases when backup and restore operations have not completed successfully within specified thresholds. You must configure these alerts to have an operator receive notification of the log shipping failure. The SQL Server Agent job category "Log Shipping Alert" is created on the monitor server instance when log shipping is enabled.

If a monitor server is not used, alert jobs are created locally on the primary server instance and each secondary server instance. The alert job on the primary server instance raises errors when backup operations have not completed successfully within a specified threshold. The alert job on the secondary server instance raises errors when local copy and restore operations have not completed successfully within a specified threshold.

A Typical Log Shipping Configuration
The following figure shows a log shipping configuration with the primary server instance, three secondary server instances, and a monitor server instance. The figure illustrates the steps performed by backup, copy, and restore jobs, as follows:

The primary server instance runs the backup job to back up the transaction log on the primary database. This server instance then places the log backup into a primary log-backup file, which it sends to the backup folder. In this figure, the backup folder is on a shared directory—the backup share.
Each of the three secondary server instances runs its own copy job to copy the primary log-backup file to its own local destination folder.
Each secondary server instance runs its own restore job to restore the log backup from the local destination folder onto the local secondary database.
The primary and secondary server instances send their own history and status to the monitor server instance.

What is the difference between UNION ALL Statement and UNION

The main difference is that UNION ALL is much faster than UNION. The reason is that UNION ALL does not look for duplicate rows, whereas UNION checks the combined result set for duplicates and removes them, whether or not any actually exist.
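
A quick illustration (the table names are hypothetical):

-- UNION removes duplicate rows, which requires an extra check of the combined result set
SELECT City FROM Customers
UNION
SELECT City FROM Suppliers

-- UNION ALL simply appends the result sets, so it is faster
SELECT City FROM Customers
UNION ALL
SELECT City FROM Suppliers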

What is the difference between SQL and PL/SQL

SQL lets us retrieve or modify data with a single command or statement, and the database engine processes those SQL statements one at a time, whereas with PL/SQL an entire block of statements is sent and processed at once. SQL is a structured query language in which individual queries are used to work with the database in a simple, declarative way, while PL/SQL is a procedural language that adds variables, functions, procedures, and other programming constructs. The other major difference is that SQL, as the name suggests, is just a structured query language, whereas PL/SQL is a combination of a programming language and SQL.

What is a heap table in SQL SERVER

A heap table is a table that does not have a clustered index, so its rows are stored without any particular order. If it has neither a clustered nor a nonclustered index, there is no indexing on the table at all.
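
A minimal sketch: a table created with no clustered index is stored as a heap, which shows up in sys.indexes with index_id = 0 and type_desc = 'HEAP' (the table name below is illustrative).

CREATE TABLE dbo.HeapDemo (ID int, Name varchar(50))   -- no clustered index: stored as a heap

SELECT OBJECT_NAME(object_id) AS TableName, index_id, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.HeapDemo')             -- returns index_id 0, type_desc 'HEAP'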

Difference between temp table and table variable

(1)Temp Tables are created in the SQL Server TEMPDB database and therefore require more IO resources and locking. Table Variables and Derived Tables are created in memory.
(2)Temp Tables will generally perform better for large amounts of data that can be worked on using parallelism whereas Table Variables are best used for small amounts of data (I use a rule of thumb of 100 or less rows) where parallelism would not provide a significant performance improvement.

(3)You cannot use a stored procedure to insert data into a Table Variable or Derived Table. For example, the following will work: INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp whereas the following will generate an error: INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp.
(4)Derived Tables can only be created from a SELECT statement but can be used within an Insert, Update, or Delete statement.

(5) In order of scope endurance, Temp Tables extend the furthest in scope, followed by Table Variables, and finally Derived Tables.
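
To illustrate points (1) and (3), here is a hedged sketch (dbo.GetPolicies_sp is the procedure named in point 3, and its output column list is assumed):

-- Temp table: created in tempdb and can be the target of INSERT ... EXEC
CREATE TABLE #MyTempTable (PolicyID int, PolicyName varchar(100))
INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp

-- Table variable: declared with DECLARE; on SQL Server 2005 it cannot be the target of INSERT ... EXEC
DECLARE @MyTableVariable TABLE (PolicyID int, PolicyName varchar(100))
-- INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp   -- raises an error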

Difference between Triggers and Stored Procedures

Triggers are basically used to implement business rules. A trigger is similar to a stored procedure; the difference is that a trigger is activated automatically when data is added, edited, or deleted in a table. In other words, triggers are a special kind of stored procedure that executes automatically when an INSERT, UPDATE, or DELETE operation takes place on a table, whereas a stored procedure must be called explicitly.

What is a join and List different types of joins

Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table. Types of joins:
INNER JOINs, OUTER JOINs, and CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINs, RIGHT OUTER JOINs, and FULL OUTER JOINs.
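
For example, with hypothetical Employees and Departments tables:

-- INNER JOIN: only rows with a match in both tables
SELECT e.EmpName, d.DeptName
FROM Employees e
INNER JOIN Departments d ON e.DeptID = d.DeptID

-- LEFT OUTER JOIN: all employees, with NULLs in the department columns where there is no match
SELECT e.EmpName, d.DeptName
FROM Employees e
LEFT OUTER JOIN Departments d ON e.DeptID = d.DeptID

-- CROSS JOIN: every combination of employee and department
SELECT e.EmpName, d.DeptName
FROM Employees e
CROSS JOIN Departments d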

Why we use SET ROWCOUNT in SQL

This statement is used in SQL Server to stop processing a query after the specified number of rows has been returned. SET ROWCOUNT 0 turns the limit off again.

Difference between varchar and nvarchar

An nvarchar column can store any Unicode data. A varchar column is restricted to an 8-bit codepage. Some people think that varchar should be used because it takes up less space. I believe this is not the correct answer. Codepage incompatibilities are a pain, and Unicode is the cure for codepage problems. With cheap disk and memory nowadays, there is really no reason to waste time mucking around with code pages anymore.
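
The storage difference is easy to see with DATALENGTH, since nvarchar stores two bytes per character:

SELECT DATALENGTH('hello')  AS VarcharBytes,   -- 5
       DATALENGTH(N'hello') AS NvarcharBytes   -- 10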

How many objects does SQL Server contain

Here is a list of some of the more important database objects:
The database itself, the transaction log, assemblies, tables, reports, filegroups, full-text catalogs, diagrams, user-defined data types, views, roles, stored procedures, users, and user-defined functions.

Different index configurations of a table in SQL

In SQL Server, a table can have any of the following index configurations: no indexes; a clustered index; a clustered index and many nonclustered indexes; a nonclustered index; many nonclustered indexes.
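
For example, a table can be given a clustered index plus one or more nonclustered indexes (the table and index names here are illustrative):

CREATE TABLE dbo.Orders (OrderID int NOT NULL, CustomerID int, OrderDate datetime)

CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)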

Built-in string functions of SQL SERVER

ASCII, NCHAR, SOUNDEX, CHAR, PATINDEX, SPACE, CHARINDEX, REPLACE, STR, DIFFERENCE, QUOTENAME, STUFF, LEFT, REPLICATE, SUBSTRING, LEN, REVERSE, UNICODE, LOWER, RIGHT, UPPER, LTRIM, RTRIM

Difference between Set and Select

SET is the ANSI standard for variable assignment.
SELECT is a non-ANSI way of assigning variables.

With SET we can assign only one variable at a time.
With SELECT we can assign multiple variables at a time.

When assigning from a query that returns more than one value, SET will fail with an error.
When assigning from a query that returns more than one value, SELECT will assign the last value returned by the query and hide the fact that the query returned more than one row.
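
A small sketch of the difference (assuming a hypothetical Users table with more than one row):

DECLARE @name varchar(50)
-- SET assigns one variable at a time and fails if the query returns more than one value
SET @name = (SELECT UserName FROM Users)            -- error if Users has more than one row

-- SELECT can assign several variables at once and silently keeps the last row's values
DECLARE @first varchar(50), @last varchar(50)
SELECT @first = FirstName, @last = LastName FROM Users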

Creating a stored procedure to increase speed

Here are some good tips for when creating a stored procedure:
(1) Always use ANSI-92 syntax; avoid the old syntax.

(2) Use SQL keywords in capital letters to increase readability.
(3) Use as few variables as possible to make better use of the procedure cache.

(4) Try to avoid dynamic queries. If we are not using dynamic SQL the execution plan does not have to be recompiled, but if we are using a dynamic query the plan must be recompiled every time.
(5) Use SET NOCOUNT ON; this suppresses the "rows affected" message for each statement and so reduces network traffic.

(6) To avoid recompilations, use OPTION (KEEPFIXED PLAN).
(7) Between SELECT and SET, use SELECT to assign values to variables; it is faster than multiple SET statements.

(8) Try to avoid IN; use EXISTS instead. EXISTS returns only a boolean value, whereas IN has to compare against the whole value list (including NULLs) and returns a heavier result than EXISTS.
(9) Between CAST and CONVERT, prefer CAST because it is the ANSI-92 standard; use CONVERT for datetime formatting.

(10) Avoid DISTINCT and ORDER BY where they are not needed; these clauses require extra sorting work and space.
(11) Avoid cursors; use a WHILE loop and temporary tables instead.

(12) Avoid using * in SELECT statements.
(13) Avoid correlated subqueries.

(14) Avoid SELECT * INTO for large tables; it locks the system objects.
(15) Avoid temporary tables where possible because they can cause the procedure to recompile.

Get top two records without Top keyword

set rowcount 2
select column,column1 from tblEmployeeMaster

Temporary tables and their types

A temporary table name is limited to 116 characters. A local temporary table is created with a single # before the table name; it can be dropped explicitly, otherwise it is dropped when the session ends, and if it is created in a stored procedure it is dropped at the end of that stored procedure. Global temporary tables, on the other hand, start with ## and are dropped when the session that created them ends and no other session is still referencing them.
A few more facts about temporary tables: they cannot be partitioned, FOREIGN KEY constraints cannot be created on them, and they cannot use a user-defined data type unless that type also exists in tempdb.
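
A short sketch of the two kinds:

-- Local temporary table: visible only to the session (or stored procedure) that created it
CREATE TABLE #LocalTemp (ID int)

-- Global temporary table: visible to all sessions; dropped when the creating session ends
-- and no other session is still referencing it
CREATE TABLE ##GlobalTemp (ID int)

DROP TABLE #LocalTemp      -- explicit cleanup; otherwise dropped automatically as described above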

SQL Server global variables: SELECT @@

SELECT @@CONNECTIONS
SELECT @@CPU_BUSY
SELECT @@CURSOR_ROWS
SELECT @@DATEFIRST
SELECT @@DBTS
SELECT @@ERROR
SELECT @@FETCH_STATUS
SELECT @@IDENTITY
SELECT @@IDLE
SELECT @@IO_BUSY
SELECT @@LANGID
SELECT @@LANGUAGE
SELECT @@LOCK_TIMEOUT
SELECT @@MAX_CONNECTIONS
SELECT @@MAX_PRECISION
SELECT @@NESTLEVEL
SELECT @@OPTIONS
SELECT @@PACK_RECEIVED
SELECT @@PACK_SENT
SELECT @@PACKET_ERRORS
SELECT @@PROCID
SELECT @@REMSERVER
SELECT @@ROWCOUNT
SELECT @@SERVERNAME
SELECT @@SERVICENAME
SELECT @@SPID
SELECT @@TEXTSIZE
SELECT @@TIMETICKS
SELECT @@TOTAL_ERRORS
SELECT @@TOTAL_READ
SELECT @@TOTAL_WRITE
SELECT @@TRANCOUNT
SELECT @@VERSION

Friday, February 26

DB Mail Profile Delete

SELECT * FROM msdb.dbo.sysmail_profile
EXEC msdb.dbo.sysmail_delete_profile_sp @profile_id =
Ex:
EXEC msdb.dbo.sysmail_delete_profile_sp @profile_id = 22

Views in SQL

In SQL Server, a view is a pre-written query that is stored on the database. A view consists of a SELECT statement, and when you run the view, you see the results of it like you would when opening a table. Some people like to think of a view as a virtual table. This is because a view can pull together data from multiple tables, as well as aggregate data, and present it as though it is a single table.

Benefits of Views
A view can be useful when there are multiple users with different levels of access, who all need to see portions of the data in the database (but not necessarily all of the data). Views can do the following:

Restrict access to specific rows in a table
Restrict access to specific columns in a table
Join columns from multiple tables and present them as though they are part of a single table
Present aggregate information (such as the results of the COUNT function)
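
For example, a simple view that restricts both the rows and the columns a user can see (this sketch reuses the Inventory table from the stored procedure example later in this post):

CREATE VIEW dbo.vw_FloridaInventory
AS
SELECT Product, Quantity            -- only these columns are exposed
FROM Inventory
WHERE Warehouse = 'FL'              -- only the Florida rows are visible
GO

SELECT * FROM dbo.vw_FloridaInventory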

SQL Server quick kit

http://www.quackit.com/sql_server/tutorial/

SPs Vs UD Functions

http://databases.about.com/od/sqlserver/a/procs_vs_functs.htm

SQL Server user-defined functions and stored procedures offer similar functionality. Both allow you to create bundles of SQL statements that are stored on the server for future use. This offers you a tremendous efficiency benefit, as you can save programming time by:
Reusing code from one program to another, cutting down on program development time
Hiding the SQL details, allowing database developers to worry about SQL and application developers to deal only in higher-level languages
Centralizing maintenance, allowing you to make business logic changes in a single place that automatically affect all dependent applications
At first glance, functions and stored procedures seem identical. However, there are several subtle, yet important differences between the two:
Stored procedures are called independently, using the EXEC command, while functions are called from within another SQL statement.
Stored procedure allow you to enhance application security by granting users and applications permission to use stored procedures, rather than permission to access the underlying tables. Stored procedures provide the ability to restrict user actions at a much more granular level than standard SQL Server permissions. For example, if you have an inventory table that cashiers must update each time an item is sold (to decrement the inventory for that item by 1 unit), you can grant cashiers permission to use a decrement_item stored procedure, rather than allowing them to make arbitrary changes to the inventory table.
Functions must always return a value (either a scalar value or a table). Stored procedures may return a scalar value, a table value or nothing at all.
Overall, stored procedures are one of the greatest treasures available to SQL Server developers. I use them heavily in my databases and encourage you to do the same. The efficiency and security benefits you’ll reap are well worth the upfront investment in time.
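
A hedged sketch of the calling difference (the object names are hypothetical):

-- A scalar function can be used inside another SQL statement
CREATE FUNCTION dbo.fn_Discounted (@price money) RETURNS money
AS
BEGIN
    RETURN @price * 0.9
END
GO
SELECT Product, dbo.fn_Discounted(Price) AS SalePrice FROM Products
GO

-- A stored procedure is called independently with EXEC
CREATE PROCEDURE dbo.usp_GetProducts
AS
SELECT Product, Price FROM Products
GO
EXEC dbo.usp_GetProducts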

Stored Procedures in SQL Server

http://databases.about.com/od/sqlserver/a/storedprocedure.htm

Microsoft SQL Server provides the stored procedure mechanism to simplify the database development process by grouping Transact-SQL statements into manageable blocks.
Benefits of Stored Procedures

Why should you use stored procedures? Let's take a look at the key benefits of this technology:
Precompiled execution. SQL Server compiles each stored procedure once and then reutilizes the execution plan. This results in tremendous performance boosts when stored procedures are called repeatedly.
Reduced client/server traffic. If network bandwidth is a concern in your environment, you'll be happy to learn that stored procedures can reduce long SQL queries to a single line that is transmitted over the wire.
Efficient reuse of code and programming abstraction. Stored procedures can be used by multiple users and client programs. If you utilize them in a planned manner, you'll find the development cycle takes less time.
Enhanced security controls. You can grant users permission to execute a stored procedure independently of underlying table permissions.
Stored procedures are very similar to user-defined functions, but there are subtle differences. For more information, read Comparing Stored Procedures and User-Defined Functions.
Structure

Stored procedures are extremely similar to the constructs seen in other programming languages. They accept data in the form of input parameters that are specified at execution time. These input parameters (if implemented) are utilized in the execution of a series of statements that produce some result. This result is returned to the calling environment through the use of a recordset, output parameters and a return code. That may sound like a mouthful, but you'll find that stored procedures are actually quite simple.
Example

Let's take a look at a practical example. Assume we have the table shown at the bottom of this page, named Inventory. This information is updated in real-time and warehouse managers are constantly checking the levels of products stored at their warehouse and available for shipment. In the past, each manager would run queries similar to the following:
SELECT Product, Quantity
FROM Inventory
WHERE Warehouse = 'FL'
This resulted in very inefficient performance at the SQL Server. Each time a warehouse manager executed the query, the database server was forced to recompile the query and execute it from scratch. It also required the warehouse manager to have knowledge of SQL and appropriate permissions to access the table information.

We can simplify this process through the use of a stored procedure. Let's create a procedure called sp_GetInventory that retrieves the inventory levels for a given warehouse. Here's the SQL code:
CREATE PROCEDURE sp_GetInventory
@location varchar(10)
AS
SELECT Product, Quantity
FROM Inventory
WHERE Warehouse = @location
Our Florida warehouse manager can then access inventory levels by issuing the command
EXECUTE sp_GetInventory 'FL'
The New York warehouse manager can use the same stored procedure to access that area's inventory.
EXECUTE sp_GetInventory 'NY'
Granted, this is a simple example, but the benefits of abstraction can be seen here. The warehouse manager does not need to understand SQL or the inner workings of the procedure. From a performance perspective, the stored procedure will work wonders. SQL Server creates an execution plan once and then reutilizes it by plugging in the appropriate parameters at execution time.

Now that you've learned the benefits of stored procedures, get out there and use them! Try a few examples and measure the performance enhancements achieved -- you'll be amazed!

Inventory Table
ID Product Warehouse Quantity
142 Green beans NY 100
214 Peas FL 200
825 Corn NY 140
512 Lima beans NY 180
491 Tomatoes FL 80
379 Watermelon FL 85

Thursday, February 25

Highest salary at any position (Nth highest)...

SELECT netAmt FROM jppPurchaseOrder AS P
WHERE 1 = (SELECT COUNT(*) FROM jppPurchaseOrder WHERE netAmt>p.netAmt)

For 0 it returns the top (highest) value, for 1 the 2nd highest, for 2 the 3rd highest, and so on; substitute the desired number in place of the 1 in the query above.

For the 2nd highest:

SELECT Max(netAmt) FROM jppPurchaseOrder
WHERE netAmt < (SELECT Max(netAmt) FROM jppPurchaseOrder)

Wednesday, February 24

Database Mail in SQL Server 2005

http://www.databasejournal.com/features/mssql/article.php/3626056/Database-Mail-in-SQL-Server-2005.htm

The SQL Mail problems that we faced in SQL Server 7.0 and 2000 are no more. SQL Server 2005 supports and uses SMTP email now, and there is no longer a need for a MAPI client to send email. In SQL Server 2005, the mail feature is called Database Mail. In this article, I am going to demonstrate step-by-step, with illustrations, how to configure Database Mail and send email from SQL Server.

Database Mail has four components.

1. Configuration Component

Configuration component has two sub components. One is the Database Mail account, which contains information such as the SMTP server login, Email account, Login and password for SMTP mail.

The Second sub component is Database Mail Profile. Mail profile can be Public, meaning members of DatabaseMailUserRole in MSDB database can send email. For private profile, a set of users should be defined.

2. Messaging Component

Messaging component is basically all of the objects related to sending email stored in the MSDB database.

3. Database Mail Executable

Database Mail uses the DatabaseMail90.exe executable to send email.

4. Logging and Auditing component

Database Mail stores the log information on MSDB database and it can be queried using sysmail_event_log.

Step 1

Before setting up the Database Mail profile and accounts, we have to enable the Database Mail feature on the server. This can be done in two ways. The first method is to use Transact SQL to enable Database Mail. The second method is to use a GUI.

In the SQL Server Management Studio, execute the following statement.

use master
go
sp_configure 'show advanced options',1
go
reconfigure with override
go
sp_configure 'Database Mail XPs',1
--go
--sp_configure 'SQL Mail XPs',0
go
reconfigure
go
Alternatively, you could use the SQL Server Surface area configuration. Refer Fig 1.0.


Fig 1.0

Step 2

The configuration component's Database Mail account can be created by using the sysmail_add_account_sp procedure. In this article, we are going to create the account "MyMailAccount," using mail.optonline.net as the mail server and makclaire@optonline.net as the e-mail account.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'MyMailAccount',
@description = 'Mail account for Database Mail',
@email_address = 'makclaire@optonline.net',
@display_name = 'MyAccount',
@username='makclaire@optonline.net',
@password='abc123',
@mailserver_name = 'mail.optonline.net'
Step 3

The second sub component of the configuration requires us to create a Mail profile.

In this article, we are going to create "MyMailProfile" using the sysmail_add_profile procedure to create a Database Mail profile.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'MyMailProfile',
@description = 'Profile used for database mail'
Step 4

Now execute the sysmail_add_profileaccount procedure, to add the Database Mail account we created in step 2, to the Database Mail profile you created in step 3.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'MyMailProfile',
@account_name = 'MyMailAccount',
@sequence_number = 1
Step 5

Use the sysmail_add_principalprofile procedure to grant the Database Mail profile access to the msdb public database role and to make the profile the default Database Mail profile.

Please execute the statement below.

EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
@profile_name = 'MyMailProfile',
@principal_name = 'public',
@is_default = 1 ;
Step 6

Now let us send a test email from SQL Server.

Please execute the statement below.

declare @body1 varchar(100)
set @body1 = 'Server :'+@@servername+ ' My First Database Email '
EXEC msdb.dbo.sp_send_dbmail @recipients='mak_999@yahoo.com',
@subject = 'My Mail Test',
@body = @body1,
@body_format = 'HTML' ;
You will get the message shown in Fig 1.1.


Fig 1.1

Moreover, in a few moments you will receive the email message shown in Fig 1.2.


Fig 1.2

You may get the error message below, if you haven't run the SQL statements from step 1.

Msg 15281, Level 16, State 1, Procedure sp_send_dbmail, Line 0
SQL Server blocked access to procedure 'dbo.sp_send_dbmail' of
component 'Database Mail XPs' because this component is turned off as part of
the security configuration for this server. A system administrator can enable
the use of 'Database Mail XPs' by using sp_configure. For more information
about enabling 'Database Mail XPs', see "Surface Area Configuration"
in SQL Server Books Online.
You may see this in the database mail log if port 25 is blocked. Refer Fig 1.3.


Fig 1.3

Please make sure port 25 is not blocked by a firewall or anti virus software etc. Refer Fig 1.4.


Fig 1.4

Step 7

You can check the configuration of the Database Mail profile and account using SQL Server Management Studio by right clicking Database Mail [Refer Fig 1.5] and clicking the Configuration. [Refer Fig 1.6]


Fig 1.5


Fig 1.6

Step 8

The log related to Database Mail can be viewed by executing the statement below. Refer Fig 1.7.

SELECT * FROM msdb.dbo.sysmail_event_log

Fig 1.7

Conclusion

This article has demonstrated, step by step and with illustrations, how to configure Database Mail and send email from SQL Server.

Trigger to send mail Alert

http://blog.netnerds.net/2008/02/create-a-basic-sql-server-2005-trigger-to-send-e-mail-alerts/

For as many times as I have read about sending e-mails using SQL Server triggers, I've rarely come across actual code samples. After someone asked for a "Triggers for Dummies" example in a Facebook SQL group, I created the following example which uses a trigger to alert a manager that an expensive item has been entered into inventory.

First, if Database Mail isn't enabled and a profile hasn't been created, we must do so.

--// First, enable Database Mail
use master
go
sp_configure 'show advanced options',1
go
reconfigure with override
go
sp_configure 'Database Mail XPs',1
go
reconfigure
go

--//Now create the mail profile. CHANGE @email_address,@display_name and @mailserver_name VALUES to support your environment
EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'DBMailAccount',
@email_address = 'sqlserver@domain.com',
@display_name = 'SQL Server Mailer',
@mailserver_name = 'exchangeServer'

EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'DBMailProfile'

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'DBMailProfile',
@account_name = 'DBMailAccount',
@sequence_number = 1 ;
Now that SQL will support sending e-mails, let's create the sample table. This is not a useful or well designed table by any means -- it's just a simple example table:

CREATE TABLE dbo.inventory (
item varchar(50),
price money
)
GO
Now that SQL mail and the table are setup, we will create a trigger that does the following:

Creates an AFTER INSERT trigger named expensiveInventoryMailer on the inventory table. This means that the trigger will be executed after the data has been entered.
Checks for items being entered that have a price of $1000 or more
If there is a match, an email is sent using the Database Mail profile we created above.
CREATE TRIGGER expensiveInventoryMailer ON dbo.inventory AFTER INSERT AS

DECLARE @price money
DECLARE @item varchar(50)

SET @price = (SELECT price FROM inserted)
SET @item = (SELECT item FROM inserted)

IF @price >= 1000
BEGIN
DECLARE @msg varchar(500)
SET @msg = 'Expensive item "' + @item + '" entered into inventory at $' + CAST(@price as varchar(10)) + '.'
--// CHANGE THE VALUE FOR @recipients
EXEC msdb.dbo.sp_send_dbmail @recipients=N'manager@domain.com', @body= @msg, @subject = 'SQL Server Trigger Mail', @profile_name = 'DBMailProfile'
END
GO
The only way to test a trigger is to add actual data, so let's do that here:
insert into inventory (item,price) values ('Vase',100)
insert into inventory (item,price) values ('Oven',1000)

Your email should arrive very quickly. If it doesn't, check the SQL Server mail log in SQL Management Studio by running SELECT * FROM sysmail_allitems.

Have fun!

Database mail.

http://articles.techrepublic.com.com/5100-10878_11-6161839.html


Database Mail, a new addition to the SQL Server 2005 database engine, is as simple to use as it is useful. Destined to be the replacement for SQL Mail, Database Mail uses a Simple Mail Transfer Protocol (SMTP) server to send e-mails rather than using the MAPI accounts that SQL Mail required. This allows your organization to send e-mails with attachments, e-mail query results, attach query results, and format HTML e-mails. It also gives you the ability to set many other configuration settings without requiring you to have an Exchange Server or configuring any type of MAPI workaround.

The advantages of Database Mail
Besides being totally SMTP based, Database Mail has many other advantages:

It runs outside of the database engine, so the stress placed on the database engine is minimal.
It is cluster aware and fully supported in a clustered environment.
Its profiles allow for the redundant use of SMTP servers. (I go into more detail about this later in the article.)
It allows you to send a query text as a parameter to the stored procedure, which will execute the query and send the results in the e-mail.
The messages are sent asynchronously via a Service Broker queue, so you will not have to wait for a response when sending an e-mail.
It has multiple security features for sending e-mail, such as a filter for what attachment extensions can be sent and a governor for attachment size.
Setting up and using Database Mail
A bit of planning is required before you can set up a Database Mail solution. First, you need to have an available SMTP server so that e-mails can be sent. If you do not have an available SMTP server, read Microsoft Knowledge Base article 308161 for tips on setting one up. If you are not sure whether your organization has an SMTP server, talk to your network administrator to obtain the machine name or the IP address of the server. Your network administrator may need to configure the server so that e-mails can be sent from your SQL Server machine.

In Database Mail, the Account holds information that the database engine uses to send e-mail messages. An Account holds information for only one e-mail server, such as the account name, e-mail address, reply-to e-mail address, server name or IP address, and some optional security settings.

To send a Database Mail e-mail, a Profile must be used. A Profile is set up of one or more Accounts. This Profile-Account setup is very useful for a couple of reasons. It allows you to associate more than one Account to a profile, which means that you can associate more than one e-mail server to a profile. So, when you try to send an e-mail, each Account for the profile is tried until the message is successfully sent, which is great in case one or more of your SMTP servers is unavailable. It also allows you to develop your application code for sending e-mails without worrying about changing the Profile name for different environments. You can use the same Profile name for your Development and Production environments; the only difference is that the Accounts contained in the profiles will be different.

It is time to take a look at how to set up a Database Mail account. For our example, I will assume that you are sitting in front of a development machine for which you have sysadmin access. If you are not, you will need to be a member of the DatabaseMailUserRole in the msdb database.

The following script sets up some variables that I will use throughout the example. Note that the entire script will be run in the context of the msdb database, where Database Mail objects are stored.

USE msdb

GO

DECLARE @ProfileName VARCHAR(255)

DECLARE @AccountName VARCHAR(255)

DECLARE @SMTPAddress VARCHAR(255)
DECLARE @EmailAddress VARCHAR(128)

DECLARE @DisplayUser VARCHAR(128)
Here I am setting up our ProfileName, AccountName, SMTP server name, and the name that will display in the From field in the e-mail.

SET @ProfileName = 'DBMailProfile';

SET @AccountName = 'DBMailAccount';

SET @SMTPAddress = 'mail.yoursmtpserver.com';
SET @EmailAddress = 'DBMail@yoursmtpserver.com';

SET @DisplayUser = 'The Mail Man';
The script in Listing A does some clean up work, so if I run the script again, I won't have to worry about errors.

The following section adds our Account, Profile, and Profile-Account association to the system.

EXECUTE msdb.dbo.sysmail_add_account_sp

@account_name = @AccountName,

@email_address = @EmailAddress,
@display_name = @DisplayUser,

@mailserver_name = @SMTPAddress



EXECUTE msdb.dbo.sysmail_add_profile_sp

@profile_name = @ProfileName



EXECUTE msdb.dbo.sysmail_add_profileaccount_sp

@profile_name = @ProfileName,

@account_name = @AccountName,

@sequence_number = 1 ;
Now that everything is set up, I will send a test e-mail.

EXEC msdb.dbo.sp_send_dbmail

@recipients=N'chapman.tim@gmail.com',

@body= 'Test Email Body',

@subject = 'Test Email Subject',

@profile_name = @ProfileName
To see if the message was sent successfully, I can run a query on the sysmail_allitems system view.

SELECT * FROM sysmail_allitems

Triggers in SQL

http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/

Triggers allow us to execute a batch of SQL code when an insert, update or delete command is executed against a specific table. In this article David will describe exactly what triggers are, show you how to create new triggers from scratch and how to test them, and provide you with some valuable links/books to help you learn more about triggers. Designing a database for a large-scale web application is a monstrous task to say the least. It can, however, be made easier by taking full advantage of the many tools, utilities and built-in objects that come with a Relational Database Management System (RDBMS), such as Microsoft SQL Server 7/2000.

One of these objects that many developers overlook is the trigger. Triggers are "attached" to a table and allow us to setup our database in such a way that whenever a record is added, updated, or deleted from a table, then SQL server will automatically execute a batch of SQL code after that table modification takes place.

We can do some really powerful things with the help of triggers. In this article I will describe exactly what triggers are, show you how to create new triggers from scratch, how to test those triggers, and also provide you with some valuable links/books to learn more about triggers.


A trigger is an object contained within an SQL Server database that is used to execute a batch of SQL code whenever a specific event occurs. As the name suggests, a trigger is “fired” whenever an INSERT, UPDATE, or DELETE SQL command is executed against a specific table.

Triggers are associated with a single table, and are automatically executed internally by SQL Server. Let’s create a very basic trigger now (I am using Microsoft SQL Server 7.0 on a Windows 2000 machine).

Start by opening Enterprise Manager (Start -> Programs -> Microsoft SQL Server 7.0 -> Enterprise Manager). In this example we will create our trigger against the "authors" table of the "pubs" database, so drill down through the tree view in the left pane until you can see the "authors" table of the "pubs" database in the right pane, like this:



Next, right click on the "authors" table and choose All Tasks -> Manage Triggers... this will open the trigger properties window, which allows us to create a new trigger:



Delete all the text in the text box; we won’t need it because we’re creating our trigger from absolute scratch. All triggers are created using the "CREATE TRIGGER" command. The syntax of the "CREATE TRIGGER" command is shown below:

CREATE TRIGGER trigger_name

ON { table | view }

[ WITH ENCRYPTION ]

{

{ { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }

[ WITH APPEND ]

[ NOT FOR REPLICATION ]

AS

[ { IF UPDATE ( column )

[ { AND | OR } UPDATE ( column ) ]

[ ...n ]

| IF ( COLUMNS_UPDATED ( ) { bitwise_operator } updated_bitmask )

{ comparison_operator } column_bitmask [ ...n ]

} ]

sql_statement [ ...n ]

}

}

It looks really confusing, but it’s actually quite simple. I won’t go into detail about it, but you can visit http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_create2_7eeq.asp for the full explanation. Enter the following SQL code into the text box:

CREATE TRIGGER trig_addAuthor

ON authors

FOR INSERT

AS

-- Get the first and last name of new author

DECLARE @newName VARCHAR(100)

SELECT @newName = (SELECT au_fName + ' ' + au_lName FROM Inserted)

-- Print the name of the new author

PRINT 'New author "' + @newName + '" added.'

Click on the "OK" button. We have just created a new trigger named "trig_addAuthor", which is attached to the "authors" table of the "pubs" database. Whenever a new record is added to the "authors" table, SQL Server will automatically execute our trigger. Let’s discuss the actual SQL code that makes up the trigger:

CREATE TRIGGER trig_addAuthor

ON authors

These two lines tell SQL server that we want to create a new trigger object named "trig_addAuthor", which will be attached to the "authors" table.

FOR INSERT

Here, we have specified that our trigger will be executed whenever an "INSERT" command is executed against the "authors" table. Other possible options include "UPDATE" and "DELETE", which would be triggered when one/more rows in the "authors" table were either updated or deleted.

If we wanted, we could handle more than one type of query in one trigger. For example, to handle both "INSERT" and "UPDATE", we would use "FOR INSERT, UPDATE".

AS

DECLARE @newName VARCHAR(100)

SELECT @newName = (SELECT au_fName + ' ' + au_lName FROM Inserted)

Any code after the "AS" keyword is actually executed when the trigger is called. It’s important to note that this part of the trigger can contain any code that a standard stored procedure could contain. You can also call stored procedures using the "EXEC" command from within the body of the trigger.

We have created a new variable named "newName". "newName" is a variable length character value that can hold a maximum of one hundred characters. On the next line, we assign the value of an SQL query to the "newName" variable.

SELECT au_fName + ' ' + au_lName FROM Inserted

We can see that this SQL command retrieves the au_fName and au_lName fields from the "Inserted" table. The "Inserted" table is a virtual table which contains all of the fields and values from the actual "INSERT" command that made SQL Server call the trigger in the first place.

To understand what I mean, let's take a look at the design of the actual "authors" table in the "pubs" database. Right click on it and choose Design Table:



A typical "INSERT" query to add a record to the "authors" table might look like this:

INSERT INTO authors(au_id, au_lname, au_fname, phone, address, city, state, zip, contract) VALUES('172-85-4534', 'Doe', 'John', '123456', '1 Black Street', 'Doeville', 'CA', '90210', 0)

When SQL server processes this "INSERT" command, it creates a new virtual table, which contains all nine of the fields in the "INSERT" command. This table is named "Inserted", and is passed to the trig_addAuthor trigger. The table is named "Inserted" because it contains all of the newly added fields and values from our "INSERT" command.

If we created a trigger that was activated when we deleted a record from the "authors" table (using the "FOR DELETE" syntax), then the virtual table would contain all of the fields and values from the deleted record(s), and would be named "Deleted".

Likewise, if we created a trigger for when an author's details were updated (using the "FOR UPDATE" syntax), then both the "Inserted" and "Deleted" virtual tables would be created and available from within the trigger. The "Deleted" table would contain all of the fields and values for the row(s) before they were updated, and the "Inserted" table would contain the new row(s) with the updated fields and values.

When dealing with triggers, you must understand how they actually operate on the data contained within their virtual tables. Let’s say that we run an "UPDATE" command on the "authors" table, which has a trigger attached to it. The "UPDATE" command might affect more than one row.

When this is the case, the "UPDATE" trigger is called for each row that was affected by the update command. So, at any one time, each trigger only deals with one row.

PRINT 'New author "' + @newName + '" added.'

Lastly, we print the "newName" variable, which now contains the full name of the new author that has just been added.

To test our new trigger, start Query Analyzer (Start -> Programs -> Microsoft SQL Server 7.0 -> Query Analyzer) and connect to your database server. Enter the following code into the SQL query pane:

USE pubs

GO

SET NOCOUNT ON

INSERT INTO authors(au_id, au_lname, au_fname, phone, address, city, state, zip, contract)

VALUES('172-85-4534', 'Doe', 'John', '123456', '1 Black Street', 'Doeville', 'CA', '90210', 0)

Click the "Play" button, or press the F5 function key to execute our "INSERT" statement. SQL Server will add the new record to the "authors" table, automatically calling our "trig_addAuthor" trigger once it's done. This is shown in the example below:


Now that we understand how an "INSERT" trigger works, let's take a look at "UPDATE" and "DELETE" triggers. Here's an "UPDATE" trigger:

CREATE TRIGGER trig_updateAuthor

ON authors

FOR UPDATE

AS

DECLARE @oldName VARCHAR(100)

DECLARE @newName VARCHAR(100)

IF NOT UPDATE(au_fName) AND NOT UPDATE(au_lName)

BEGIN

RETURN

END

SELECT @oldName = (SELECT au_fName + ' ' + au_lName FROM Deleted)

SELECT @newName = (SELECT au_fName + ' ' + au_lName FROM Inserted)

PRINT 'Name changed from "' + @oldName + '" to "' + @newName + '"'

This trigger would automatically be executed whenever we updated one/more records in the "authors" table. It starts out by creating two new variables: oldName and newName. The "UPDATE" function is used to check whether or not the "au_fName" and "au_lName" fields have been updated by the "UPDATE" query that executed the "trig_updateAuthor" trigger. If neither field has been updated, the trigger returns control to SQL Server.

As I already mentioned, "UPDATE" triggers have access to two virtual tables: Deleted (which contains all of the fields and values for the records before they were updated), and Inserted (which contains all of the fields and values for the records after they have been updated). We get the value of the author's name before the update from the "Deleted" table and store it in the "oldName" variable.

The updated name is stored in the "newName" variable, and is extracted from the virtual table, "Inserted". Lastly, both the authors name before and after the update query are printed.

So, if we ran an update query (through Query Analyzer) like this:

UPDATE authors

SET au_lName = 'Black'

WHERE au_id = '172-32-1176'

... then Query Analyzer would display the following text in the results pane:

Name changed from "John Doe" to "John Black"

Update triggers can also be used to check field constraints and relationships. The "contract" field of the "authors" table is a bit field representing whether or not this author has a contract with their publisher. The publisher may require notification of when an author who is on contract is removed from the "authors" table.

We could create a "DELETE" trigger on the "authors" table that would do this for us automatically:

CREATE TRIGGER trig_delAuthor

ON authors

FOR DELETE

AS

DECLARE @isOnContract BIT

SELECT @isOnContract = (SELECT contract FROM Deleted)

IF(@isOnContract = 1)

BEGIN

PRINT "Code to notify publisher goes here"

END

The "DELETE" trigger follows the same format and keyword syntax as the "INSERT" and "UPDATE" triggers. The only difference is that the "DELETE" trigger has access to the virtual table "Deleted", which contains all of the deleted rows from the "DELETE" command that triggered the "trig_delAuthor" trigger in the first place.

When triggers are used correctly, they can save a lot of development work. One of the main benefits of using triggers is that they are stored in a central repository (the database), meaning that they are accessible from all client applications / web pages that can connect to the database.

Before triggers came along, if we had a table that needed to be updated and we wanted to perform some actions after that update, then we would have to "hard code" the extra SQL into our application. This meant that if we wanted to change the code later down the track, then each client would need the updated version of our application. This is both annoying and time consuming.

If you're in the field of database design / development, and haven't seen triggers before, then you should take a look at some of the books and links below. They will give you some great information and examples about triggers and how to properly integrate them into your n-Tier applications.

Random Number Generator

There are many methods to generate random number in SQL Server.

Method 1 : Generate Random Numbers (Int) between a Range
---- Create the variables for the random number generation
DECLARE @Random INT;
DECLARE @Upper INT;
DECLARE @Lower INT

---- This will create a random number between 1 and 999
SET @Lower = 1 ---- The lowest random number
SET @Upper = 999 ---- The highest random number
SELECT @Random = ROUND(((@Upper - @Lower -1) * RAND() + @Lower), 0)
SELECT @Random

Method 2 : Generate Random Float Numbers
SELECT RAND( (DATEPART(mm, GETDATE()) * 100000 )
+ (DATEPART(ss, GETDATE()) * 1000 )
+ DATEPART(ms, GETDATE()) )

Method 3 : Random Numbers Quick Scripts
---- random float from 0 up to 20 - [0, 20)
SELECT 20*RAND()
-- random float from 10 up to 30 - [10, 30)
SELECT 10 + (30-10)*RAND()
--random integer BETWEEN 0
AND 20 - [0, 20]
SELECT CONVERT(INT, (20+1)*RAND())
----random integer BETWEEN 10
AND 30 - [10, 30]
SELECT 10 + CONVERT(INT, (30-10+1)*RAND())

Method 4 : Random Numbers (Float, Int) Tables Based with Time
DECLARE @t TABLE( randnum float )
DECLARE @cnt INT; SET @cnt = 0
WHILE @cnt <=10000
BEGIN
SET @cnt = @cnt + 1
INSERT INTO @t
SELECT RAND( (DATEPART(mm, GETDATE()) * 100000 )
+ (DATEPART(ss, GETDATE()) * 1000 )
+ DATEPART(ms, GETDATE()) )
END
SELECT randnum, COUNT(*)
FROM @t
GROUP BY randnum

Method 5 : Random number on a per row basis
---- The distribution is pretty good however there are the occasional peaks.
---- If you want to change the range of values just change the 1000 to the maximum value you want.
---- Use this as the source of a report server report and chart the results to see the distribution
SELECT randomNumber, COUNT(1) countOfRandomNumber
FROM (
SELECT ABS(CAST(NEWID() AS binary(6)) %1000) + 1 randomNumber
FROM sysobjects) sample
GROUP BY randomNumber
ORDER BY randomNumber

Tuesday, February 23

Limitations on Views

The ANSI_NULLS and QUOTED_IDENTIFIER options must be turned on when the view is created. The following calls turn the corresponding database options on (replace YourDatabase with the name of your database); alternatively, SET ANSI_NULLS ON and SET QUOTED_IDENTIFIER ON can be issued in the session that creates the view.

EXEC sp_dboption 'YourDatabase', 'ANSI nulls', TRUE
EXEC sp_dboption 'YourDatabase', 'quoted identifier', TRUE

The ANSI_NULLS option must have been turned on during the creation of all the tables that are referenced by the view.

All the tables referenced by the view must be in the same database as the view.

All the tables referenced by the view must have the same owner as the view.

An indexed view must be created with the SCHEMABINDING option. This option prohibits the schema of the base tables from being changed (adding or dropping a column, for instance). To change the tables, the indexed view must be dropped first.

Any user-defined functions referenced in the view must have been created with the SCHEMABINDING option as well.

Any functions referenced in an indexed view must be deterministic; deterministic functions return the same value each time they’re invoked with the same arguments.

A column can not be referenced twice in the SELECT statement unless all references, or all but one reference, to the column are made in a complex expression.
Illegal:
SELECT qty, orderid, qty
Legal:
SELECT qty, orderid, SUM(qty)

You can’t use ROWSET, UNION, TOP, ORDER BY, DISTINCT, COUNT(*), COMPUTE, or COMPUTE BY.

The AVG, MAX, MIN, STDEV, STDEVP, VAR, and VARP aggregate functions aren’t allowed in the SELECT statement.

A SUM() that references a nullable expression isn’t allowed.

CONTAINS and FREETEXT aren’t allowed in the SELECT statement.

If you use GROUP BY, you can’t use HAVING, ROLLUP, or CUBE, and you must use COUNT_BIG() in the select list.

You can’t modify more than one table at a time through a view.

If your view is based on aggregate functions, you can’t use it to modify data.

If your view is based on a table that contains fields that don’t allow null values yet your view doesn’t display those fields, then you won’t be able to insert new data.
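
Putting several of these rules together, here is a minimal indexed-view sketch (dbo.OrderDetails is a hypothetical table, and qty is assumed to be declared NOT NULL because SUM() over a nullable expression is not allowed):

CREATE VIEW dbo.vw_OrderTotals
WITH SCHEMABINDING
AS
SELECT orderid, SUM(qty) AS total_qty, COUNT_BIG(*) AS row_cnt
FROM dbo.OrderDetails
GROUP BY orderid
GO

-- The first index on the view must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotals ON dbo.vw_OrderTotals (orderid)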

Biggest Table in a DB

CREATE TABLE #temp (
table_name sysname ,
row_count INT,
reserved_size VARCHAR(50),
data_size VARCHAR(50),
index_size VARCHAR(50),
unused_size VARCHAR(50))
SET NOCOUNT ON
INSERT #temp
EXEC sp_msforeachtable 'sp_spaceused ''?'''
SELECT a.table_name,
a.row_count,
COUNT(*) AS col_count,
a.data_size
FROM #temp a
INNER JOIN information_schema.columns b
ON a.table_name collate database_default
= b.table_name collate database_default
GROUP BY a.table_name, a.row_count, a.data_size
ORDER BY CAST(REPLACE(a.data_size, ' KB', '') AS integer) DESC
DROP TABLE #temp

Simple Cursor

This is the simplest example of the SQL Server Cursor. I have used this all the time for any use of Cursor in my T-SQL.
DECLARE @AccountID INT
DECLARE @getAccountID CURSOR
SET @getAccountID = CURSOR FOR
SELECT Account_ID
FROM Accounts
OPEN @getAccountID
FETCH NEXT
FROM @getAccountID INTO @AccountID
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @AccountID
FETCH NEXT
FROM @getAccountID INTO @AccountID
END
CLOSE @getAccountID
DEALLOCATE @getAccountID

SQL: Not In

EXCEPT or NOT EXISTS clauses can be used in place of LEFT JOIN or NOT IN for better performance.
Example:
SELECT EmpNo, EmpName
FROM EmployeeRecord
WHERE Salary > 1000 AND Salary
NOT IN (SELECT Salary
FROM EmployeeRecord
WHERE Salary > 2000);

(Recommended)
SELECT EmpNo, EmpName
FROM EmployeeRecord
WHERE Salary > 1000
EXCEPT
SELECT EmpNo, EmpName
FROM EmployeeRecord
WHERE Salary > 2000
ORDER BY EmpName;
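
The same filter can also be written with NOT EXISTS, which keeps the column comparison of the original NOT IN version:

SELECT e.EmpNo, e.EmpName
FROM EmployeeRecord e
WHERE e.Salary > 1000
AND NOT EXISTS (SELECT 1
                FROM EmployeeRecord x
                WHERE x.Salary > 2000
                AND x.Salary = e.Salary)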

SQL Guidelines

http://blog.sqlauthority.com/2008/09/25/sql-server-guidelines-and-coding-standards/

SP faster

Use SET NOCOUNT ON at the beginning of SQL batches, stored procedures and triggers. This improves the performance of stored procedures.
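
For example (the procedure and table names are illustrative):

CREATE PROCEDURE dbo.usp_UpdatePrices
AS
SET NOCOUNT ON   -- suppresses the "n rows affected" messages sent back to the client
UPDATE Products SET Price = Price * 1.05
GO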

SQL Faqs

What is RDBMS?
Relational Data Base Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.

What is normalization?
Database normalization is a data design and organization process applied to data structures based on rules that help build relational databases. In relational database design, the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
What are different normalization forms?

1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies. Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.
What is Stored Procedure?
A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.
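A small sketch of creating and calling a parameterized procedure (the procedure name is illustrative; EmployeeRecord is the sample table used elsewhere in this post):

CREATE PROCEDURE usp_GetEmployeesBySalary
    @MinSalary INT
AS
BEGIN
    SET NOCOUNT ON
    SELECT EmpNo, EmpName, Salary
    FROM EmployeeRecord
    WHERE Salary >= @MinSalary
END
GO
EXEC usp_GetEmployeesBySalary @MinSalary = 1000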

What is Trigger?
A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.
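A minimal sketch of an AFTER INSERT trigger (EmployeeAudit is a hypothetical audit table); note how it reads the inserted virtual table mentioned later in this post:

CREATE TRIGGER trg_EmployeeRecord_Insert
ON EmployeeRecord
AFTER INSERT
AS
BEGIN
    INSERT INTO EmployeeAudit (EmpNo, ChangedOn)
    SELECT EmpNo, GETDATE()   -- inserted holds the new rows that fired the trigger
    FROM inserted
END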

What is View?
A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T-SQL select command and can come from one to many different base tables or even other views.
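For example (a sketch over the hypothetical EmployeeRecord table used in earlier examples):

CREATE VIEW vw_HighPaidEmployees
AS
SELECT EmpNo, EmpName, Salary
FROM EmployeeRecord
WHERE Salary > 2000
GO
SELECT EmpNo, EmpName FROM vw_HighPaidEmployees   -- queried like a table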



What is Index?
An index is a physical structure containing pointers to the data. Indices are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. Users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query. In a table scan SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a severe impact on performance.
Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.

Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.

What is the difference between clustered and a non-clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.
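A quick sketch of creating both kinds of index (index names are invented):

CREATE CLUSTERED INDEX IX_EmployeeRecord_EmpNo
ON EmployeeRecord (EmpNo)        -- only one per table; defines the physical row order

CREATE NONCLUSTERED INDEX IX_EmployeeRecord_Salary
ON EmployeeRecord (Salary)       -- leaf pages contain index rows pointing to the data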
What are the different index configurations a table can have?
A table can have one of the following index configurations:
• No indexes
• A clustered index
• A clustered index and many nonclustered indexes
• A nonclustered index
• Many nonclustered indexes




What are cursors?
Cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time.
• Declare cursor
• Open cursor
• Fetch row from the cursor
• Process fetched row
• Close cursor
• Deallocate cursor
What is the use of DBCC commands?
DBCC stands for database consistency checker. We use these commands to check the consistency of the databases, i.e., maintenance, validation task and status checks.
E.g. DBCC CHECKDB – Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC – To check that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP – Checks all tables in a filegroup for any damage.

What is a Linked Server?
Linked servers are a concept in SQL Server by which we can add another SQL Server to a group and query both SQL Server databases using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that allow remote data to be retrieved, joined and combined with local data. The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
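A hedged sketch of registering a linked server; the server name, remote instance and login below are placeholders:

EXEC sp_addlinkedserver
    @server = 'REMOTESQL',            -- the name used in four-part queries
    @srvproduct = '',
    @provider = 'SQLOLEDB',
    @datasrc = 'RemoteHost\Instance'  -- network name of the remote instance

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'REMOTESQL',
    @useself = 'FALSE',
    @rmtuser = 'remote_login',
    @rmtpassword = 'remote_password'

SELECT * FROM REMOTESQL.SomeDb.dbo.SomeTable   -- four-part name: server.database.schema.object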
What is Collation?
Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.
What are different type of Collation Sensitivity?
Case sensitivity
A and a, B and b, etc.
Accent sensitivity
a and á, o and ó, etc.
Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.
Width sensitivity
When a single-byte character (half-width) and the same character when represented as a double-byte character (full-width) are treated differently then it is width sensitive.
What’s the difference between a primary key and a unique key?
Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default the primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn’t allow NULLs, but a unique key allows one NULL only.
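A small sketch (table and column names invented) showing both constraints side by side:

CREATE TABLE Employees (
    EmpNo INT NOT NULL PRIMARY KEY,       -- clustered index by default, NULLs not allowed
    NationalID VARCHAR(20) NULL UNIQUE    -- nonclustered index by default, a single NULL allowed
)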
How to implement one-to-one, one-to-many and many-to-many relationships while designing tables?
One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
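For instance, a many-to-many relationship between students and courses via a junction table (all names illustrative); the composite primary key is built from the two foreign keys:

CREATE TABLE Students (StudentID INT PRIMARY KEY, StudentName VARCHAR(50))
CREATE TABLE Courses (CourseID INT PRIMARY KEY, CourseName VARCHAR(50))
CREATE TABLE StudentCourses (
    StudentID INT NOT NULL REFERENCES Students (StudentID),
    CourseID INT NOT NULL REFERENCES Courses (CourseID),
    PRIMARY KEY (StudentID, CourseID)     -- composite key from both foreign keys
)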
What is a NOLOCK?
Using the NOLOCK query optimiser hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a Dirty Read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements take Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay(blocking).
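For example:

SELECT EmpNo, EmpName
FROM EmployeeRecord WITH (NOLOCK)   -- no shared locks taken; dirty reads are possible
WHERE Salary > 1000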

What is difference between DELETE & TRUNCATE commands?
Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. Truncate will actually remove all the rows from a table and there will be no data in the table after we run the truncate command.

TRUNCATE
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
TRUNCATE cannot be rolled back using logs.
TRUNCATE is a DDL command.
TRUNCATE resets the identity of the table.
DELETE
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE Can be Rolled back using logs.
DELETE is DML Command.
DELETE does not reset identity of the table.
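A quick sketch of the two commands against the sample EmployeeRecord table:

DELETE FROM EmployeeRecord WHERE Salary < 1000   -- logs each row, fires DELETE triggers, keeps the identity counter
TRUNCATE TABLE EmployeeRecord                    -- removes all rows, resets the identity seed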
Difference between Function and Stored Procedure?
A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas
stored procedures cannot be. UDFs that return tables can be treated as another rowset and used in JOINs with other tables. Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.

When is the use of UPDATE_STATISTICS command?
This command is basically used after a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has occurred, the statistics on the indexes have to be updated to take these changes into account. UPDATE STATISTICS refreshes the statistics on these tables accordingly.

What types of Joins are possible with Sql Server?
Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table. Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.
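For example (Customers and Orders are hypothetical tables):

SELECT c.CustomerID, o.OrderID
FROM Customers c
INNER JOIN Orders o ON o.CustomerID = c.CustomerID        -- only customers that have orders

SELECT c.CustomerID, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o ON o.CustomerID = c.CustomerID   -- all customers, NULLs where no order exists

SELECT c.CustomerID, o.OrderID
FROM Customers c
CROSS JOIN Orders o                                        -- Cartesian product of the two tables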

What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?
HAVING specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT statement and is typically used with a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause. The HAVING clause is basically used only with the GROUP BY function in a query, whereas the WHERE clause is applied to each row before it takes part in the GROUP BY function. HAVING criteria are applied after the grouping of rows has occurred.
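For example (DeptNo is an assumed column on the sample table):

SELECT DeptNo, COUNT(*) AS EmpCount
FROM EmployeeRecord
WHERE Salary > 1000        -- applied to each row before grouping
GROUP BY DeptNo
HAVING COUNT(*) > 5        -- applied to each group after grouping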
What is sub-query? Explain properties of sub-query.
Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.
A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT statement, if executed independently of the T-SQL statement in which it is nested, will return a result set; in other words, a subquery SELECT statement can stand alone and does not depend on the statement in which it is nested. A subquery SELECT statement can return any number of values and can appear in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.

Properties of Sub-Query
A subquery must be enclosed in parentheses, must be placed on the right-hand side of the comparison operator, and cannot contain an ORDER BY clause. A query can contain more than one subquery.

What are types of sub-queries?
Single-row subquery, where the subquery returns only one row; multiple-row subquery, where the subquery returns multiple rows; and multiple-column subquery, where the subquery returns multiple columns.
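For instance (DeptNo and the Departments table are assumed for illustration):

SELECT EmpNo, EmpName
FROM EmployeeRecord
WHERE Salary = (SELECT MAX(Salary) FROM EmployeeRecord)                  -- single-row subquery

SELECT EmpNo, EmpName
FROM EmployeeRecord
WHERE DeptNo IN (SELECT DeptNo FROM Departments WHERE Location = 'NYC')  -- multiple-row subquery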

What is SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly.
Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.
What is User Defined Functions?
User-Defined Functions allow you to define your own T-SQL functions that can accept zero or more parameters and return a single scalar data value or a table data type.
What kind of User-Defined Functions can be created?
There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.

Scalar User-Defined Function
A Scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages. You pass in 0 to many parameters and you get a return value.
Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.
Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, whereas a view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL SELECT command, or a group of them, gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.
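Two small sketches (object names invented for illustration):

CREATE FUNCTION dbo.fn_AnnualSalary (@MonthlySalary INT)
RETURNS INT
AS
BEGIN
    RETURN @MonthlySalary * 12        -- scalar UDF: returns a single value
END
GO
CREATE FUNCTION dbo.fn_EmployeesAbove (@MinSalary INT)
RETURNS TABLE
AS
RETURN (SELECT EmpNo, EmpName, Salary
        FROM EmployeeRecord
        WHERE Salary > @MinSalary)    -- inline table-valued UDF: a parameterized, view-like rowset
GO
SELECT dbo.fn_AnnualSalary(2000)
SELECT * FROM dbo.fn_EmployeesAbove(1000)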
Which TCP/IP port does SQL Server run on? How can it be changed?
SQL Server runs on port 1433 by default. It can be changed from the Server Network Utility, TCP/IP properties –> Port number, on both the client and the server.
What are the authentication modes in SQL Server? How can it be changed?
Windows mode and mixed mode (SQL & Windows).
To change authentication mode in SQL Server click Start, Programs, Microsoft SQL Server and click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group. Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the Security page.
Where are SQL Server user names and passwords stored?
They get stored in master db in the sysxlogins table.
Which command using Query Analyzer will give you the version of SQL server and operating system?
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')
What is SQL server agent?
SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.
Can a stored procedure call itself or recursive stored procedure? How many level SP nesting possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.
What is @@ERROR?
The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement, it must be saved to a variable if you need to process it further after checking it.
What is RAISERROR?
Stored procedures report errors to client applications via the RAISERROR command. RAISERROR doesn’t change the flow of a procedure; it merely displays an error message, sets the @@ERROR automatic variable, and optionally writes the message to the SQL Server error log and the NT application event log.
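A minimal sketch of checking @@ERROR and raising an error (severity 16 is a typical user-error level):

UPDATE EmployeeRecord SET Salary = Salary * 1.1 WHERE Salary < 1000

DECLARE @err INT
SELECT @err = @@ERROR     -- capture immediately; the next statement resets it
IF @err <> 0
    RAISERROR ('Salary update failed with error %d.', 16, 1, @err)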



What is log shipping?
Log shipping is the process of automating the backup of the database and transaction log files on a production SQL Server and then restoring them onto a standby server. Only the Enterprise Edition supports log shipping. In log shipping, the transaction log file from one server is automatically applied to the backup database on the other server. If one server fails, the other server has the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it automatically backs up transaction logs throughout the day and automatically restores them on the standby server at a defined interval.

What is the difference between a local and a global temporary table?
A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.
A global temporary table remains in the database permanently, but its rows exist only within a given connection. When the connections are closed, the data in the global temporary table disappears; however, the table definition remains with the database for access when the database is opened next time.
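A quick sketch of the naming convention for the two kinds of temporary table:

CREATE TABLE #LocalTemp (ID INT)     -- one #: visible only to the current connection
CREATE TABLE ##GlobalTemp (ID INT)   -- two ##: visible to all connections while referenced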

What command do we use to rename a db?
sp_renamedb ‘oldname’ , ‘newname’
If someone is using the db, it will not accept sp_renamedb. In that case, first bring the db to single-user mode using sp_dboption, use sp_renamedb to rename the database, and then use sp_dboption to bring the database back to multi-user mode.
What is sp_configure commands and set commands?
Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
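For example (SomeDb is a placeholder database name):

EXEC sp_configure 'show advanced options', 1   -- server-level setting
RECONFIGURE

ALTER DATABASE SomeDb SET AUTO_SHRINK ON       -- database-level setting

SET NOCOUNT ON                                 -- session-level setting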
What are the different types of replication? Explain.
The SQL Server 2000-supported replication types are as follows:
• Transactional
• Snapshot
• Merge
Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot replication is best used as a method for replicating data that changes infrequently or where the most up-to-date values (low latency) are not a requirement. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.
In transactional replication, an initial snapshot of data is applied at Subscribers, and then when data modifications are made at the Publisher, the individual transactions are captured and propagated to Subscribers.
Merge replication is the process of distributing data from Publisher to Subscribers, allowing the Publisher and Subscribers to make updates while connected or disconnected, and then merging the updates between sites when they are connected.
What are the OS services that the SQL Server installation adds?
MSSQLServer service, SQLServerAgent service, and MS DTC (Distributed Transaction Coordinator).
What are three SQL keywords used to change or set someone’s permissions?
GRANT, DENY, and REVOKE.
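For example (SomeUser is a placeholder database user):

GRANT SELECT ON EmployeeRecord TO SomeUser     -- allow
DENY DELETE ON EmployeeRecord TO SomeUser      -- explicitly forbid; overrides any grant
REVOKE SELECT ON EmployeeRecord FROM SomeUser  -- remove a previously granted or denied permission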
What does it mean to have quoted_identifier on? What are the implications of having it off?
When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.
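For example:

SET QUOTED_IDENTIFIER ON
SELECT "EmpName" FROM EmployeeRecord   -- double quotes delimit an identifier
SET QUOTED_IDENTIFIER OFF
SELECT 'some literal text'             -- literals always use single quotes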
What is the STUFF function and how does it differ from the REPLACE function?
The STUFF function is used to overwrite existing characters. Using the syntax STUFF(string_expression, start, length, replacement_characters), string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters in the string that are substituted, and replacement_characters are the new characters interjected into the string.
The REPLACE function is used to replace every occurrence of existing characters. Using the syntax REPLACE(string_expression, search_string, replacement_string), every incidence of search_string found in string_expression will be replaced with replacement_string.
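For example:

SELECT STUFF('abcdef', 2, 3, 'XY')   -- returns 'aXYef': 3 characters from position 2 are overwritten
SELECT REPLACE('abcabc', 'b', 'Z')   -- returns 'aZcaZc': every occurrence is replaced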
Using query analyzer, name 3 ways to get an accurate count of the number of records in a table?
SELECT *
FROM table1          -- row count reported as "rows affected"

SELECT COUNT(*)
FROM table1

SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('table1')
AND indid < 2

How to rebuild Master Database?
Shutdown Microsoft SQL Server 2000, and then run Rebuildm.exe. This is located in the Program Files\Microsoft SQL Server\80\Tools\Binn directory. In the Rebuild Master dialog box, click Browse. In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
Click Settings. In the Collation Settings dialog box, verify or change settings used for the master database and all other databases. Initially, the default collation settings are shown, but these may not match the collation selected during setup. You can select the same settings used during setup or select new collation settings. When done, click OK. In the Rebuild Master dialog box, click Rebuild to start the process. The Rebuild Master utility reinstalls the master database. To continue, you may need to stop a server that is running. Source: http://msdn2.microsoft.com/en-us/library/aa197950(SQL.80).aspx

What are the basic functions of the master, msdb, model and tempdb databases?
The Master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master
database, you must administer this database with care. The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in the instance.
What are primary keys and foreign keys?
Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all
keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.




What is data integrity? Explain constraints?
Data integrity is an important feature in SQL Server. When used properly, it ensures that data is accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.
A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity as the primary key constraints.
A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. The foreign key constraints are used to enforce referential integrity.
A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.
A NOT NULL constraint enforces that the column will not accept null values. The not null constraints are used to enforce domain integrity, as the check constraints.
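A combined sketch (the Orders and Customers tables here are invented) showing each constraint type:

CREATE TABLE Orders (
    OrderID INT NOT NULL PRIMARY KEY,                           -- entity integrity
    OrderCode VARCHAR(20) UNIQUE,                               -- entity integrity
    CustomerID INT NOT NULL REFERENCES Customers (CustomerID),  -- referential integrity
    Quantity INT CHECK (Quantity > 0),                          -- domain integrity
    OrderDate DATETIME NOT NULL                                 -- domain integrity
)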
What are the properties of the Relational tables?
Relational tables have six properties:
• Values are atomic.
• Column values are of the same kind.
• Each row is unique.
• The sequence of columns is insignificant.
• The sequence of rows is insignificant.
• Each column must have a unique name.
What is De-normalization?
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.
How to get @@error and @@rowcount at the same time?
If @@ROWCOUNT is checked after the error-checking statement, it will have 0 as its value because it has already been reset.

And if @@ROWCOUNT is checked before the error-checking statement, then @@ERROR gets reset. To get @@ERROR and @@ROWCOUNT at the same time, read both in the same statement and store them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR
What is Identity?

Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates unique values, but those values cannot be controlled. Identity/GUID columns do not need to be indexed.
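For example, re-creating the Accounts table used in the cursor example above, now with an identity column (illustrative):

CREATE TABLE Accounts (
    Account_ID INT IDENTITY(1, 1) PRIMARY KEY,   -- seed 1, increment 1
    AccountName VARCHAR(50)
)
INSERT INTO Accounts (AccountName) VALUES ('First')   -- Account_ID is generated automatically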

What is a Scheduled Jobs or What is a Scheduled Tasks?

Scheduled tasks let user automate processes that run on regular or predictable cycles. User can schedule administrative tasks, such as cube processing, to run during times of slow business activity. User can also determine the order in which tasks run by creating job steps within a SQL Server Agent job. E.g. Back up database, Update Stats of Tables. Job steps give user control over flow of execution. If one job fails, user can configure SQL Server Agent to continue to run the remaining tasks or to stop execution.
What is a table called if it has neither a clustered nor a non-clustered index? What is it used for?

An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storage of data. Often it is better to drop all indexes from a table, do the bulk inserts, and then restore the indexes afterwards.
What is BCP? When is it used?

BCP (bulk copy program) is a tool used to copy huge amounts of data from tables and views. BCP does not copy the structures from source to destination, only the data.
How do you load large volumes of data into a SQL Server database?

BCP is a tool used to copy huge amounts of data from tables. The BULK INSERT command imports a data file into a database table or view in a user-specified format.
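A hedged sketch (file path, database and server names are placeholders):

-- Export with bcp from the command line (-c character format, -T trusted connection)
bcp SomeDb.dbo.EmployeeRecord out C:\data\emp.txt -c -T -S MyServer

-- Import the same file with T-SQL
BULK INSERT SomeDb.dbo.EmployeeRecord
FROM 'C:\data\emp.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')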
Can we rewrite subqueries into simple select statements or with joins?

Subqueries can often be rewritten to use a standard outer join, resulting in faster performance. As we may know, an outer join returns all non-matching rows with NULL values. Hence we combine the outer join with an IS NULL test in the WHERE clause to reproduce the result set without using a subquery.
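For instance, a NOT IN subquery rewritten as an outer join (Customers and Orders are hypothetical, and Orders.CustomerID is assumed to be non-nullable so the two forms return the same rows):

SELECT c.CustomerID
FROM Customers c
WHERE c.CustomerID NOT IN (SELECT o.CustomerID FROM Orders o)

SELECT c.CustomerID
FROM Customers c
LEFT OUTER JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.CustomerID IS NULL   -- keep only customers with no matching order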
Can SQL Servers linked to other servers like Oracle?

SQL Server can be linked to any server provided it has an OLE DB provider to allow a link. For example, Microsoft provides an OLE DB provider for Oracle that can be used to add an Oracle database as a linked server to the SQL Server group.
How to know which index a table is using?

EXEC sp_helpindex 'table_name'
-- or query the sysindexes system table:
SELECT name FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid > 0
How to copy the tables, schema and views from one SQL server to another?

Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets user extract, transform, and consolidate data from disparate sources into single or multiple destinations.
What is Self Join?

This is a particular case when a table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another.
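For example, assuming the sample EmployeeRecord table also had a ManagerNo column holding the EmpNo of each employee's manager:

SELECT e.EmpName AS Employee, m.EmpName AS Manager
FROM EmployeeRecord e
LEFT JOIN EmployeeRecord m ON e.ManagerNo = m.EmpNo   -- same table joined to itself via two aliases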
What is Cross Join?

A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
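For example (Products and PriceLevels are hypothetical tables):

SELECT p.ProductName, pl.PriceLevel
FROM Products p
CROSS JOIN PriceLevels pl   -- every product paired with every price level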
Which virtual table does a trigger use?

Inserted and Deleted.
List few advantages of Stored Procedure.
• Stored procedures can reduce network traffic and latency, boosting application performance.
• Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
• Stored procedures help promote code reuse.
• Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
• Stored procedures provide better security to your data.
What is Data Warehousing?
A data warehouse is a database designed for reporting and analysis rather than day-to-day transaction processing. Its data is:
• Subject-oriented, meaning that the data in the database is organized so that all the data elements relating to the same real-world event or object are linked together;
• Time-variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time;
• Non-volatile, meaning that data in the database is never over-written or deleted, once committed, the data is static, read-only, but retained for future reporting;
• Integrated, meaning that the database contains data from most or all of an organization’s operational applications, and that this data is made consistent.
What is OLTP(OnLine Transaction Processing)?

In OLTP (online transaction processing) systems, relational database design uses the discipline of data modeling and generally follows the Codd rules of data normalization in order to ensure absolute data integrity. Using these rules, complex information is broken down into its simplest structures (tables) where all of the individual atomic-level elements relate to each other and satisfy the normalization rules.
How do SQL server 2000 and XML linked? Can XML be used to access data?
FOR XML (RAW, AUTO, EXPLICIT)

You can execute SQL queries against existing relational databases to return results as XML rather than standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO, or EXPLICIT.
OPENXML
OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to access XML data within the Transact-SQL context by transferring data from an XML document into the relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the relational environment.
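Two short sketches (the XML text and column names are illustrative):

SELECT EmpNo, EmpName
FROM EmployeeRecord
FOR XML AUTO                -- returns the rows as XML instead of a rowset

DECLARE @doc VARCHAR(1000), @hdoc INT
SET @doc = '<Emps><Emp EmpNo="1" EmpName="Ann"/><Emp EmpNo="2" EmpName="Bob"/></Emps>'
EXEC sp_xml_preparedocument @hdoc OUTPUT, @doc
SELECT *
FROM OPENXML (@hdoc, '/Emps/Emp', 1)            -- flag 1 = attribute-centric mapping
WITH (EmpNo INT, EmpName VARCHAR(50))
EXEC sp_xml_removedocument @hdoc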
What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query. It is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Within Query Analyzer there is an option called “Show Execution Plan” (located on the Query drop-down menu). If this option is turned on, it will display the query execution plan in a separate window when the query is run again.

Primitive data types

All the data types directly supported by a language's compiler are known as primitive data types.

Topics to study

1. What is mutable and immutable?
2. What is a delegate?
3. Events
4. Differences between the 1.1 / 2.0 / 3.5 versions of the Framework
5. WPF, WCF, WF
6. LINQ
7. OOP concepts
8. Queries
9. In a single UPDATE query, update all 1000 to 2000, all 2000 to 3000 and all 3000 to 4000 (see the sketch after this list)
10. From the employee salary table, get the second-highest salary (also sketched below)
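Sketches for items 9 and 10 (table and column names are assumed):

-- 9. One UPDATE using CASE, so every row is evaluated against its original value exactly once
UPDATE EmployeeRecord
SET Salary = CASE Salary
                 WHEN 1000 THEN 2000
                 WHEN 2000 THEN 3000
                 WHEN 3000 THEN 4000
                 ELSE Salary
             END

-- 10. Second-highest salary
SELECT MAX(Salary) AS SecondHighestSalary
FROM EmployeeRecord
WHERE Salary < (SELECT MAX(Salary) FROM EmployeeRecord)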

Events

Events enable a class or object to notify other classes or objects when something of interest occurs. The class that sends (or raises) the event is called the publisher and the classes that receive (or handle) the event are called subscribers.

Boxing And Unboxing C#

Boxing and unboxing is an essential concept in C#'s type system. With boxing and unboxing one can link between value types and reference types by allowing any value of a value type to be converted to and from type object. Boxing and unboxing enable a unified view of the type system wherein a value of any type can ultimately be treated as an object.

Converting a value type to a reference type is called boxing; unboxing is the explicit reverse operation.

C# provides a "unified type system". All types, including value types, derive from the type object. It is possible to call object methods on any value, even values of "primitive" types such as int.

The example

using System;
class Test
{
static void Main() {
Console.WriteLine(3.ToString());
}
}

calls the object-defined ToString method on an integer literal.

The example

class Test
{
static void Main() {
int i = 1;
object o = i; // boxing
int j = (int) o; // unboxing
}
}

An int value can be converted to object and back again to int.

This example shows both boxing and unboxing. When a variable of a value type needs to be converted to a reference type, an object box is allocated to hold the value, and the value is copied into the box.

Unboxing is just the opposite. When an object box is cast back to its original value type, the value is copied out of the box and into the appropriate storage location.

Boxing conversions

A boxing conversion permits any value type to be implicitly converted to the type object or to any interface type implemented by the value type. Boxing a value of a value type consists of allocating an object instance and copying the value-type value into that instance.

For example, for any value type G, the boxing class would be declared as follows:

class G_Box
{
    G value;
    G_Box(G g) {
        value = g;
    }
}

Boxing of a value v of type G now consists of executing the expression new G_Box(v), and returning the resulting instance as a value of type object.

Thus, the statements
int i = 12;
object box = i;

conceptually correspond to
int i = 12;
object box = new int_Box(i);

Boxing classes like G_Box and int_Box above don't actually exist and the dynamic type of a boxed value isn't actually a class type. Instead, a boxed value of type G has the dynamic type G, and a dynamic type check using the is operator can simply reference type G. For example,
int i = 12;
object box = i;
if (box is int) {
Console.Write("Box contains an int");
}

will output the string "Box contains an int" on the console.

A boxing conversion implies making a copy of the value being boxed. This is different from a conversion of a reference type to type object, in which the value continues to reference the same instance and simply is regarded as the less derived type object.

For example, given the declaration

struct Point
{
public int x, y;
public Point(int x, int y) {
this.x = x;
this.y = y;
}
}

the following statements
Point p = new Point(10, 10);
object box = p;
p.x = 20;
Console.Write(((Point)box).x);
will output the value 10 on the console because the implicit boxing operation that occurs in the assignment of p to box causes the value of p to be copied. Had Point instead been declared a class, the value 20 would be output because p and box would reference the same instance.

Unboxing conversions

An unboxing conversion permits an explicit conversion from type object to any value type or from any interface type to any value type that implements the interface type. An unboxing operation consists of first checking that the object instance is a boxed value of the given value type, and then copying the value out of the instance. An unboxing conversion of an object box to a value type G consists of executing the expression ((G_Box)box).value.

Thus, the statements
object box = 12;
int i = (int)box;

conceptually correspond to
object box = new int_Box(12);
int i = ((int_Box)box).value;

For an unboxing conversion to a given value type to succeed at run-time, the value of the source argument must be a reference to an object that was previously created by boxing a value of that value type. If the source argument is null or a reference to an incompatible object, an InvalidCastException is thrown.

CONCLUSION :

This type system unification provides value types with the benefits of object-ness without introducing unnecessary overhead.

For programs that don't need int values to act like objects, int values are simply 32-bit values. For programs that need int values to behave like objects, this capability is available on demand. This ability to treat value types as objects bridges the gap between value types and reference types that exists in most languages.