Jumping into ASP.NET Part 2: Creating the Data Storage Layer with SQL Server 2000

 

Mike Amundsen
October 2003

Applies to:
    Microsoft® ASP.NET
    Microsoft® Visual Studio® .NET
    Microsoft® SQL Server™ 2000

Summary: Learn how to create a database storage layer for the DotNetKB ASP.NET solution using SQL Server 2000 and Visual Studio .NET 2003. Along the way, security issues regarding SQL Server, IIS, and ASP.NET will also be covered. (23 printed pages)

Download JumpPart2Sample.msi.

Contents

Introduction
Creating the Database Diagram with Visual Studio .NET 2003
Authoring Stored Procedures with Visual Studio .NET 2003
Security Issues with IIS, ASP.NET and SQL Server
Creating ASP.NET Test Pages
Generating the Source Code and Install Script
Summary

Introduction

In the initial installment of "Jumping into ASP.NET" (Jumping Into ASP.NET Part 1: Application Planning and Design), we covered the basics of planning and designing an ASP.NET solution called DotNetKB. In this installment, we'll work through the details of creating the database storage layer using Microsoft® SQL Server™ 2000 and Microsoft Visual Studio® .NET 2003. This includes creating the database itself (adding the tables, indexes, constraints, and relations) and authoring the stored procedures used to access the data. Along the way, we'll also cover security issues regarding SQL Server, Internet Information Server (IIS), and ASP.NET. By the time we get to the end of this installment, we'll have a functional, full-featured, and secure data storage system for the DotNetKB solution.

Creating the Database Diagram with Visual Studio .NET 2003

One of the many cool aspects of Visual Studio .NET 2003 is that you can use it as your primary SQL Server editor for most tasks. If you've got the proper rights on the target database server, you can create databases, tables, indexes, constraints, relations, views, stored procedures, and functions all from the comfort of Visual Studio .NET 2003. Visual Studio .NET has a default database project you can use to start the process and even includes a number of very handy templates for creating tables, triggers, stored procedures, and so on. Finally, since you are using Visual Studio .NET as your editing environment for SQL Server, you can also use it to store all your SQL Server scripts in Microsoft Visual SourceSafe®. This is handy for shared projects and other cases where long-term maintenance is important.

Referencing User Scenarios

For the project in this series—DotNetKB—I've created a list of over 30 user scenarios that identify tasks required to support the application outlined in Part 1 of this series. We'll be using that list of user scenarios to identify data to store in tables and stored procedures to execute at runtime to manage the stored data. Below is a partial list of the user scenarios. The complete list can be found on the related support site, User Scenarios for DotNetKB Project.

  1. Get list of questions by keyword search (reverse-date order)

  2. Get list of questions by date (reverse-date order)

  3. Get list of questions by topic (alpha topic/reverse-date questions order)

  4. Get list of questions for a single topic (reverse-date order)

  5. Get list of questions without any responses (in reverse-date order)

  6. Get a count of questions

  7. Get a count of questions without responses

  8. Get a count of questions for a single topic

  9. Get a count of questions answered by a single expert

  10. Get a single question and list of responses (responses in reverse-date order)

  11. Add a new question

  12. Edit an existing question

  13. Delete an existing question and related responses

As you can see from the list, there are lots of tasks to perform just for question records alone, and you have response, topic, and expert records to deal with as well. But this is actually getting ahead of things a bit. First, you have to identify the data elements that need to be stored for each record (Question, Response, Topic, Expert) and arrange the results as tables in a SQL Server database.

Creating the Database Project

The first thing to do is to open Visual Studio .NET 2003 and create a new Database Project. The Visual Studio .NET Database Project type is a bit of a secret, since it is buried in the list of options when you start a new project. Once you start using it, though, I think you'll find that Database projects have lots of advantages and are well worth the small effort it takes to get started.

To create a new Database project with Visual Studio .NET, you can perform the following tasks:

  • Start Visual Studio .NET and, if the New Project dialog does not appear automatically, select File -> New -> Project from the main menu.
  • When the New Project dialog appears, expand the Other Projects folder in the tree view list on the left and then click on the Database Projects folder. This will display the Database Project template on the right.
  • Now, type the project name in the Name: input box. In my case, I typed DotNetKB_Database, but you can type whatever you wish.
  • Next, click the OK button to create the project and open it in Visual Studio .NET.
  • You'll see a dialog appear asking you to select a database to associate with the project. Click Cancel for now. You'll create a new database and add it to your project in the next step.

Figure 1 shows how this looks in Visual Studio .NET as you are creating the project.


Figure 1. Creating a new database project

Next, create the new DotNetKB database on your SQL Server. You can do this right from the Server Explorer in Visual Studio .NET (right-click the Data Connections node and select Create New SQL Server Database...) or with your favorite SQL Server administration tool. Once the DotNetKB database exists, you can add a connection to it in your project so that you can work with it from within Visual Studio .NET 2003. Here's what you need to do:

  • In the Solution Explorer window, expand your DotNetKB_Database project to expose the Database References item.
  • Right-click over the Database References item and select New Database Reference... from the context menu to bring up the Data Link Properties dialog.
  • Enter the database server name where you added the DotNetKB database, enter your login credentials, and select DotNetKB from the drop-down menu.
  • Click the OK button to add the reference to your project.

Figure 2 shows how the dialog looks when completed.


Figure 2. Data Link Properties dialog

Now you have created the database and added it as a reference for your project. The next step is to define the tables needed to store the data for the DotNetKB solution.

Defining Database Tables with Visual Studio .NET

The easiest way to define database tables within Visual Studio .NET is to create a database diagram. This will allow you to define all the details in a WYSIWYG-like editor. All you need to do is expand the corresponding tree in the Server Explorer, right-click over the Database Diagrams item and select New Database Diagram... from the context menu to start a new blank diagram. The first time you start a diagram, you'll be asked to select a table from the database. Since you haven't created any yet, just dismiss the dialog. Now you're ready to start defining your tables.

The user scenario documents generated from Part 1 of this article series will have the information needed to define the tables. Usually the scenarios that define adding new records to the system are your best bet for figuring out the data you need to store. Sometimes you'll need to review other scenarios such as record updates or possibly even reporting scenarios to make sure there aren't some additional fields that you should be storing. In our case, the "add record" scenarios will work out just fine.

For example, below is the add scenario for questions:

Add a new question

Add a new question record into the system including the title, date/time, the topic id for the category to which this question belongs, and the question text. Optionally, supply the questioner name and email address. After adding the new question, return a unique question id integer to the caller.

When reading through scenario descriptions, the nouns are usually a good indicator of data that needs to be stored, or possibly of table names (for example, question record is a dead giveaway, right?). In the scenario above, the important nouns are the title, date/time, topic id, question text, questioner name, email address, and question id. Using that information, you can add a new table to the database diagram and define the columns needed. Below is a step-by-step example of how to add a table to the diagram.

  • Right-click over the diagram "surface" and select New Table... from the context menu. Enter Questions as the table name and click the OK button to add it to the diagram.

  • With the Questions table dialog in view, type the field information from the scenario shown above. For example, Column Name = ID, Data Type = int, Length = 4, and uncheck the Allow Nulls check box. Continue this for the rest of the table (see Figure 3).


    Figure 3. Questions table

You'll notice there is a small gold key next to the first column (ID). This indicates that it is the primary key field of the table. You can set this by right-clicking over the column name in the list and selecting Primary Key from the context menu. Also, this ID column should be set as an auto-incrementing identity column, which allows SQL Server to automatically generate a unique integer value for each record added to the table. You can set this by right-clicking over the column, selecting Properties from the context menu, and then selecting the Columns tab in the Property Pages dialog. The rest is pretty easy to work out (see Figure 4).


Figure 4. Columns tab selected in the Property Pages dialog
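If you prefer to see the result as a script rather than a diagram, below is a rough sketch of how the Questions table might be written in T-SQL. The column names and sizes here are my own assumptions based on the scenario text (the actual create scripts are generated for you later in this article); the parts to notice are the IDENTITY property and the PRIMARY KEY constraint on the ID column.

   CREATE TABLE Questions
   (
      ID int IDENTITY (1, 1) NOT NULL PRIMARY KEY, -- auto-generated unique key
      TopicID int NOT NULL,           -- category this question belongs to
      Title varchar(100) NOT NULL,    -- assumed size
      Body varchar(2000) NOT NULL,    -- the question text (assumed name and size)
      PostedDate datetime NOT NULL,   -- date/time the question was added
      AuthorName varchar(50) NULL,    -- optional questioner name (assumed)
      AuthorEmail varchar(100) NULL   -- optional questioner e-mail (assumed)
   )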

Using information from the "add topic record" and "add response record" scenarios, you can build the other two tables. Remember to create ID columns for each table and to mark those columns as identity columns and as the primary key. Figure 5 below shows the three tables completed.


Figure 5. The three completed tables

You'll notice that the tables have lines connecting them in the diagram. These lines indicate foreign key relationships between the tables. For example, the TopicID column in the Questions table is related to the ID column of the Topics table. By storing this relationship in the database, you can create rules that prevent improper data from being saved to the database. In this case, the relationship rule states that the only valid values allowed in the Questions.TopicID column are values that already exist in one of the records in the Topics.ID column.

You can define these relations by dragging the TopicID column from the Questions table and dropping it on the ID column of the Topics table. A dialog will appear showing you the details of the rule definition and asking you to confirm it by pressing the OK button (see Figure 6).


Figure 6. Create Relationship dialog

You can see that there is also a relation defined between Responses.QuestionID and Questions.ID.
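In script form, those two relationships come out as foreign key constraints. A minimal sketch (the constraint names below are my own; the scripts generated later in this article use their own naming) looks like this:

   ALTER TABLE Questions
      ADD CONSTRAINT FK_Questions_Topics
      FOREIGN KEY (TopicID) REFERENCES Topics (ID)

   ALTER TABLE Responses
      ADD CONSTRAINT FK_Responses_Questions
      FOREIGN KEY (QuestionID) REFERENCES Questions (ID)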

Note   You may have noticed that there is no table defined for Experts right now. I decided to store the expert information in an XML file instead of the database. The primary reason for this was to give us a chance to talk about how to read and write XML data and be able to mix XML and relational data in the same application. We'll deal with the expert data in the next installment.

With the database and tables defined, the actual storage of the data for our solution is taken care of. However, we still need a way to read and write the information in those tables. For that, we'll define stored procedures in SQL Server.

Authoring Stored Procedures with Visual Studio .NET 2003

While the data tables define the way data is stored in the database, they do not explain how the data will be accessed. The details of reading and writing records, and of recalling selected rows and columns from tables, still need to be taken care of. Often, developers will create ad hoc queries within their code to read and write the data. Not only is this inefficient, it is also a security problem. For this application, all data access work will be handled through SQL Server stored procedures (sometimes called "stored procs" or "sprocs"). Using stored procedures will increase the performance of the solution and make it more secure. In addition, using stored procedures adds a level of abstraction to the data layer that can shield other parts of the solution from minor changes in the data layout and format. This makes your solution more robust and easier to maintain over time.

Why Ad Hoc Queries Are Bad

It's not at all uncommon to read articles and see code samples that look like this:

   Private Function GetSomeData(ByVal ID As Integer) As SqlDataReader
      ' cd and cn are assumed to be module-level SqlCommand and SqlConnection fields
      Dim strSQL As String
      strSQL = "SELECT * FROM MyTable WHERE ID=" & ID.ToString()

      cd = New SqlCommand
      With cd
         .CommandText = strSQL
         .CommandType = CommandType.Text
         .Connection = cn
         .Connection.Open()
         Return .ExecuteReader(CommandBehavior.CloseConnection)
      End With
   End Function

There are a couple of things that make the code above undesirable. First, by embedding the SQL query statement in the code, any changes in the data layer will require editing and recompiling the code layer, too. This is more than just inconvenient; it can introduce opportunities for other errors and generally messes up the separation between data services and code.

Second, by using string concatenation ("...WHERE ID=" & ID.ToString()) without any input validation, you expose your application to possible hacking. Basically, you've given malicious users an opportunity to inject additional SQL keywords into your code. For example, depending on your input patterns, it might be possible to enter not just "13" or "21" as valid table IDs, but also "13; DELETE FROM USERS" or some other dangerous statement. While good input validation can protect you from most SQL injection attacks, it is better to remove all inline SQL statements completely and make it much harder to abuse your application's data.

Finally, inline SQL statements run slower than stored procedures. When you create a stored procedure and save it to the database, SQL Server evaluates the text and stores it in an optimized fashion that makes it easier for SQL Server to use at runtime. When you use inline ad hoc queries, that evaluation must be done every time someone runs that code. For applications that serve lots of users, that could mean hundreds of evaluations of the same query per minute.

Stored procedures keep your code cleaner, provide an additional level of security, and can add performance improvements to your solution. All very good reasons to get rid of all inline queries and use sprocs instead.
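To make the contrast concrete, here is a sketch of the same lookup rewritten to call a stored procedure. The procedure name (MyTableGetItem) is hypothetical, and cd and cn are assumed to be the same module-level SqlCommand and SqlConnection fields as in the earlier snippet, but the pattern (CommandType.StoredProcedure plus SqlParameter objects instead of string concatenation) is the one used throughout the rest of this article.

   Private Function GetSomeDataSafely(ByVal ID As Integer) As SqlDataReader
      ' Call a stored procedure (hypothetical name) instead of building SQL text.
      cd = New SqlCommand
      With cd
         .CommandText = "MyTableGetItem"
         .CommandType = CommandType.StoredProcedure
         ' The ID is passed as a typed parameter, never concatenated into a string.
         .Parameters.Add(New SqlParameter("@ID", ID))
         .Connection = cn
         .Connection.Open()
         Return .ExecuteReader(CommandBehavior.CloseConnection)
      End With
   End Function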

Adding Stored Procedures to your Visual Studio .NET Database Project

It's really easy to create stored procedures with Visual Studio .NET 2003. First, you need to open a Database Project; you did that earlier in this article. Then you can create stored procedures from code templates, or you can simply use Visual Studio .NET 2003 to edit new sprocs directly against the attached database in the Server Explorer window. For this article, I'll focus on the method that edits directly against the attached database server. Later, I'll show you how you can generate all the resulting scripts for installation on remote servers.

Before getting into the mechanics of authoring sprocs with Visual Studio .NET 2003, it is important to make a few points about building solid stored procedures in general. First, it's a good idea to look at the whole process of creating and executing stored procedures as a full-fledged member of the multi-tier application model. Stored procedures offer a way to program your data access. By doing so, you gain control and add efficiencies to your overall solution. This means you should think of the stored procedure collection as an isolated tier in the application. And a good data access strategy will allow the stored procedures to exist as a stand-alone component. This means security, error handling, and any other details that make a good component layer need to also be present in your stored procedure layer. Basically, the idea is to approach the T-SQL language like you would any other high-level programming environment—not just as a way to build database queries.

Note   Now, I suspect that some readers might be thinking that programming SQL Server is not something they are prepared for or that it is something best left to DBA-types. While having database administrator experience is certainly helpful, you don't need to be a rocket scientist to do a good job at programming SQL Server. Like any other language, it takes some time and practice to get the feel of it, but it is not much different than other languages. If you can program in Microsoft Visual Basic® .NET, you can program in T-SQL, too.

Adding Stored Procedures Using Visual Studio .NET

Here's the nuts and bolts of adding stored procedures to an existing SQL Server database from within Visual Studio .NET 2003. You'll use the Server Explorer to open a new sproc template, make your edits, and then save it back to the database. Here is a step-by-step example:

  • Open Visual Studio .NET and open an existing Database project (like the one you started earlier in this article) or start a new one.
  • In the Server Explorer, expand the Data Connections tree to locate the database you wish to work on (DotNetKB) and right-click over the Stored Procedures node to bring up the context menu.
  • Select New Stored Procedure from the context menu to open a stored procedure template in the Visual Studio .NET editor space. Now you're ready to start typing.
  • When you are done editing, simply close the page in the editor and Visual Studio .NET will save the item to the database using the stored procedure name. If there are any errors, the editor will report them to you so you can fix them before you save the sproc (see Figure 11 for an example).

Here's an example of a simple stored procedure to return a list of Topics.

   CREATE PROCEDURE TopicsGetList
   AS
      SET NOCOUNT ON -- don't return affected rows value
      SELECT 
         ID,
         Title,
         Description
      FROM 
         Topics
      ORDER BY 
         Title
      RETURN @@ERROR

There are a couple of things worth pointing out in this example. First, notice the SET NOCOUNT ON line. This tells SQL Server to stop computing the number of affected rows for this query and to stop returning that value to the caller; it's extra work that isn't needed. Second, the RETURN @@ERROR line at the end is important. This returns the integer value of any error that occurs within SQL Server. You can use this for added diagnostics and error handling in the calling routines. You won't do anything with this right away, but these are two good habits to adopt when creating your sprocs.
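For example, if you run the procedure from Query Analyzer (or call it from another sproc), you can capture and inspect that return value. Here's a quick sketch:

   DECLARE @rc int
   EXEC @rc = TopicsGetList
   IF @rc <> 0
      PRINT 'TopicsGetList reported error ' + CAST(@rc AS varchar(10))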

Now, here's a more complex stored procedure. This one is used to retrieve a single Topic record from the database. You'll notice some additional items including input parameters, output parameters to return the selected values, and some programming to check the input parameters and return an error if needed.

   CREATE PROCEDURE TopicsGetItem
      (
         @AdminCode char(3),
         @ID int,
         @Title varchar(30) OUTPUT,
         @Description varchar(500) OUTPUT
      )
   AS
      SET NOCOUNT ON -- don't return affected rows value
      -- make sure this is an admin user
      IF @AdminCode<>'adm'
         BEGIN
            RETURN 100 -- bad admin error
         END
      -- check to see if record exists
      IF (SELECT Count(ID) FROM Topics WHERE ID=@ID)=0
         BEGIN
            RETURN 101 --- bad id code
         END
      -- go ahead and return the record
      SELECT 
         @Title=Title, 
         @Description=Description
      FROM 
         Topics
      WHERE 
         ID=@ID
      -- return error or 0 if successful
      RETURN @@ERROR

There are a few other important items to point out in this example. First, you will see a list of parameters at the top of the procedure. All but the first two are marked as OUTPUT parameters. These are used to return the values of the selected record. For a single record, this is more efficient than returning a result set that contains just one row with all the fields.

Second, you'll notice the block of T-SQL that checks the value of the @AdminCode parameter to make sure the proper code was passed. If it is not correct, a return code of 100 is returned and execution of the procedure stops. Third, you can see the check of the @ID parameter to make sure it represents an existing record. If not, a return code of 101 is returned and execution halts. Finally, if the input arguments are all valid, the procedure attempts to select the record and return the values. If any errors occur at this point, the final line of the procedure handles them.
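Calling a procedure that has OUTPUT parameters takes a little more setup on the T-SQL side. Here's a quick test you could run from Query Analyzer against the procedure above (the ID value of 1 is just an example):

   DECLARE @rc int, @Title varchar(30), @Description varchar(500)

   EXEC @rc = TopicsGetItem
      @AdminCode = 'adm',
      @ID = 1,
      @Title = @Title OUTPUT,
      @Description = @Description OUTPUT

   SELECT @rc AS ReturnCode, @Title AS Title, @Description AS Description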

Note   Usually, it's a good idea to store any custom error codes and their meanings in a separate table in the database or in a flat file available to the solution. This makes it easy to update and share the error codes with other sub-systems in your solution. Since this is a small example and I'm only using two error codes, I've decided to create a document that contains the codes and messages so that any other sub-systems can refer to this document.

There are more than 25 stored procedures in the solution. I'll include just one more example here; you can download the rest of the code at the link indicated at the top of this article. This last example uses a custom scalar-valued function.

Using a Custom Scalar Function

Sometimes, a single stored procedure is not enough to solve a problem. For example, one of our user scenarios requires that we list the number of responses for a single question. One way to solve this problem would be to create a sub-query that counts the responses. However, another way is to create a custom function that returns a scalar value and include that in the question query. This has the added advantage of allowing us to re-use the scalar function in other stored procedures.

Adding a custom function is just like adding a stored procedure. You just right-click over the Functions node of the selected database in the Server Explorer tree and select New Scalar-Valued Function from the context menu. Then, edit the document in the editor and save it just as you do stored procedures.

Here's the code for the custom function:

CREATE FUNCTION dbo.fn_QuestionsGetResponseCount
   (
      @ID int
   )
RETURNS int
AS
   BEGIN
      DECLARE @ResponseCount int
      Set @ResponseCount = 
      (
         SELECT 
            COUNT(Responses.ID) 
         FROM 
            Responses
         WHERE
            Responses.QuestionID=@ID
      )
      RETURN @ResponseCount
   END
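Because it returns a scalar value, the function can also be dropped straight into a column list. For instance, an ad hoc query (not one of the solution's stored procedures, and assuming the Questions table has ID and Title columns as described in the scenario earlier) that lists each question with its response count might look like this:

   SELECT
      ID,
      Title,
      dbo.fn_QuestionsGetResponseCount(ID) AS ResponseCount
   FROM
      Questions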

And here's a stored procedure that uses the custom function:

CREATE PROCEDURE QuestionsGetCountWithNoResponses
   (
      @Total int OUTPUT
   )
AS
   SET NOCOUNT ON -- don't return affected rows value

   SELECT 
      @Total=Count(ID)
   FROM 
      Questions
   WHERE 
      dbo.fn_QuestionsGetResponseCount(Questions.ID)=0

   RETURN @@ERROR

Now that you know how to author stored procedures and custom functions, there's just one more thing to cover when building your data-layer with Visual Studio .NET 2003—security.

Security Issues with IIS, ASP.NET and SQL Server

SQL Server, Internet Information Server, and the ASP.NET engine all have solid and reliable security models that can work well together. Microsoft also does a good job of keeping the default settings for each service fairly restrictive in order to keep your data and your applications safe. The challenge most developers face is using SQL Server, IIS, and ASP.NET to set up the proper trusts between your application and your data without creating security holes that others could easily exploit. Since there are three services involved (SQL Server, IIS, and ASP.NET), there are three key steps you need to take to properly secure your solution. For this installment, we'll cover one of the more common (and secure) ways of setting up sufficient rights and trusts for your Web application.

Note   The whole topic of security and Web solutions is more than can be adequately covered in this series. For better coverage of the issues and possible solutions, check out the Patterns and Practices Series Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication.

Defining the DotNetKB Custom IIS User Account

One of the safest ways to secure your Web application is to define a custom, limited-rights user and configure IIS to run as that custom user when executing your Web application. This is relatively easy to do and will make sure that anyone accessing your Web application will only be granted the rights you want them to have.

The first step is to create a new Windows user, in this case called DotNetKB, with a strong password, and to add it to the Guests Windows group. Also, be sure to select the check boxes labeled Password never expires and User cannot change password. This creates a user with limited rights that you can use as the identity when running your Web application in IIS (see Figure 7).


Figure 7. A created user with limited rights

Next, call up the Internet Information Services (IIS) management console and select the Web application that will host the pages. In this case, that is the Web application that will hold the test pages we build later in this article (DotNetKB_WebSite). Right-click over the Web application in the treeview and select Properties... from the context menu. Then select the Directory Security tab and click the Edit button in the Anonymous access and authentication control section of the dialog. Finally, enter the custom user name (DotNetKB), uncheck the Allow IIS to control password box, and enter the password for the custom user account. When all is done, click the OK button to save these changes to the IIS metabase (see Figure 8).


Figure 8. Authentication Methods dialog

Now, IIS runs under a custom, limited rights account. Whenever anyone accesses the Web pages for your application, they will be running as this custom user and will only have the authorization rights granted to this custom user.

Granting the DotNetKB User Account Access to SQL Server

Next, you want to grant that same custom user the proper rights to the database (DotNetKB). To do this, you can use Microsoft SQL Server Enterprise Manager, or you can write a custom script to create the user and grant it access to the proper objects. In this article, we'll cover how to do it with SQL Server Enterprise Manager. You'll see a scripted example later, too.

Note   Although Visual Studio .NET 2003 has very powerful integration features for SQL Server, it does not make it easy to manage users and user permissions from within Visual Studio .NET 2003. In larger organizations and teams, these high-level tasks are usually handled by database administrators anyway.

So, after launching SQL Server Enterprise Manager, here's how you can add the custom user (DotNetKB) to our database (see Figure 9):

  • In the treeview on the left, expand the nodes to expose the DotNetKB database. On my machine, the tree is as follows: Console Root | SQL Server Group | (LOCAL) (Windows NT) | Databases | DotNetKB.

  • Next, right-click over the Users node under your database and select New Database User... When the Database User Properties - New User dialog appears, select <new> from the Login name drop-down box.

  • When the SQL Server Login Properties – New Login dialog appears, select the General tab and enter DotNetKB in the Name input box. Be sure the Windows Authentication radio button is selected and, in the Domain drop-down box, select the name of the machine that hosts the custom user account. Then select DotNetKB from the Database drop-down box.

  • Now select the Databases tab and locate the DotNetKB database in the list at the top of the dialog and select it. Next, be sure to select the public role in the list at the bottom of the dialog. Finally, click the OK button at the bottom of the dialog to save your changes.


    Figure 9. Adding a custom user to the database

Next, you need to add execution rights for all the stored procedures and custom functions in the DotNetKB database. To do this, you just need to grant the rights to the public role. You could grant them to the DotNetKB user directly, but granting them to public makes it easier for future logins (once they have been given access to the DotNetKB database) to execute the procedures without needing new rights entries for each user. A scripted version of these same steps appears after the list below.

Here are the steps for granting execute rights for the stored procedures and functions in the DotNetKB database:

  • Highlight the Users node under the DotNetKB database in the treeview to display the list of users for this database. Locate the DotNetKB user and double-click on it to bring up the Database Users Properties dialog.
  • With the public role highlighted (and checked), click on the Properties... button to open the Database Role Properties dialog. Now click on the Permissions... button to bring up the list of database objects and permissions settings.
  • Now, with the public role selected in the Database role drop-down list at the top of the dialog, locate all the stored procedures and the custom functions (you might need to expand the dialog to see the full names) defined for this database and be sure to select the EXECUTE check box next to each one. You may see some other check boxes checked on for some system objects—be sure to leave these alone.
  • Finally, after you've set all the EXECUTE rights, click the OK button to save the changes and dismiss the dialog. Continue clicking OK until all the dialogs are closed.
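If you'd rather script these permission steps than click through Enterprise Manager (the custom script option mentioned earlier), a minimal T-SQL sketch looks something like the following. MYSERVER is a placeholder for the machine that hosts the DotNetKB Windows account, and you would repeat the GRANT line for each stored procedure and function in the database.

   -- Create a SQL Server login for the Windows account (machine name is a placeholder)
   EXEC sp_grantlogin 'MYSERVER\DotNetKB'

   -- Give that login access to the DotNetKB database
   USE DotNetKB
   EXEC sp_grantdbaccess 'MYSERVER\DotNetKB'

   -- Grant execute rights to the public role (repeat for each sproc and function)
   GRANT EXECUTE ON TopicsGetList TO public
   GRANT EXECUTE ON TopicsGetItem TO public
   GRANT EXECUTE ON dbo.fn_QuestionsGetResponseCount TO public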

You've now created a custom user for IIS and applied the proper rights for that user in SQL Server. You now need to make one configuration change in your ASP.NET Web Project to make sure that ASP.NET executes all calls to SQL Server using that same user account.

Setting Your ASP.NET Application to Impersonate the DotNetKB User

The final step in creating a solid trust configuration for ASP.NET Web applications running under IIS is to configure the ASP.NET Web application to accept the Windows user identity passed from IIS and use it to access other operating system resources. To do that, all you need to do is make a single-line entry in the root web.config file.

Note   Even though we've not actually built an ASP.NET Web application to hold our pages, you can use this information in the next section where we create some test pages to validate the functionality of the data access layer.

The modified web.config file looks like this:

<configuration>
  <system.web>
    ... other important stuff ...
    <identity impersonate="true"/> <!-- assume IIS user identity -->
    ... more important stuff ...
  </system.web>
</configuration>

Note that all you need to do is add the <identity> element and set the impersonate attribute to true. You do not need to enter a user account or password since that information will be supplied by IIS. This means that even if someone could read your configuration file, they would not know what identity credentials are used to execute your Web application.
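If you want to verify that the impersonation is actually in effect, a quick diagnostic (not something you'd keep in the production pages) is to display the current Windows identity from any ASP.NET page. With the configuration above, it should report the custom DotNetKB account rather than the default ASPNET worker process account.

   ' Temporary diagnostic in a test page's code-behind
   Private Sub Page_Load(ByVal sender As System.Object, _
      ByVal e As System.EventArgs) Handles MyBase.Load
      ' With <identity impersonate="true"/>, this reports the IIS anonymous account
      ' (for example, MYSERVER\DotNetKB) instead of the local ASPNET account.
      Response.Write(System.Security.Principal.WindowsIdentity.GetCurrent().Name)
   End Sub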

You have built your custom user and set up the proper rights in both SQL Server and IIS. Now it's time to create some test pages to make sure it's all working properly.

Creating ASP.NET Test Pages

It's always a good idea to create test pages to call into the SQL Server data layer and validate that input and output parameters are handled properly. This is usually the only way to be sure that future ASP.NET pages and components in the production solution will work as expected. This is especially true when validating trust boundaries and security issues when calling from one layer to another in the solution.

Also, when testing, it is important to not get tied up in building a production-like interface. You just want to test the target methods. In fact, it's a good strategy to purposely create 'ugly' test pages that you won't be tempted to try to pass off as the final production solution! For this article, I built really simple ASP.NET pages that contained a list of test records and a simple input form for adding, editing, and deleting test records.

For example, below is the WebForm layout for testing the Topic records. You'll notice that it contains a status label for any error or other messages; a record count label; a data grid to display the list of records; an input box that allows me to enter a record ID to retrieve; and a small table that supports adding, editing, and deleting records (see Figure 10).


Figure 10. WebForm layout for testing the Topic records

When you build your test pages, it's a good idea to keep them clean and simple. I usually add a small bit of code behind each button that calls into local methods to handle the database actions. Here's the code behind the Get Record button on the TopicTest.aspx page.

Private Sub btnGetTopic_Click( _
    ByVal sender As System.Object, _
    ByVal e As System.EventArgs) Handles btnGetTopic.Click
    Try
        Dim ID As Integer = Int32.Parse(txQueryID.Text)
        GetItem(ID) ' make database call
        txID.Text = txQueryID.Text
        txTitle.Text = mTitle
        txDescription.Text = mDescription
        lbStatus.Text = "success!"
    Catch ex As Exception
        lbStatus.Text = ex.Message
    End Try
End Sub

Notice that the only real action in this method is handled by the GetItem(ID) method call. That makes the database call and sets the local variables with the returned values. Below is the code for that GetItem method. Notice it makes extensive use of SqlParameter objects to handle both input and output values.

Private Sub GetItem(ByVal ID As Integer)
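    ' Note: cn, cd, pr, mTitle, and mDescription are assumed to be module-level
    ' fields declared elsewhere in this page class.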
    Try
        pr = New SqlParameter("RETURN_VALUE", SqlDbType.Int)
        pr.Direction = ParameterDirection.ReturnValue
        Dim pTitle As SqlParameter = New SqlParameter
        With pTitle
            .Direction = ParameterDirection.Output
            .DbType = DbType.String
            .ParameterName = "@Title"
            .Size = 30
        End With
        Dim pDescription As SqlParameter = New SqlParameter
        With pDescription
            .Direction = ParameterDirection.Output
            .DbType = DbType.String
            .ParameterName = "@Description"
            .Size = 500
        End With
        cd = New SqlCommand
        With cd
            .CommandText = "TopicsGetItem"
            .CommandType = CommandType.StoredProcedure
            .Parameters.Add(New SqlParameter("@AdminCode", "adm"))
            .Parameters.Add(New SqlParameter("@ID", ID))
            .Parameters.Add(pTitle)
            .Parameters.Add(pDescription)
            .Parameters.Add(pr)
            .Connection = cn
            .Connection.Open()
            .ExecuteNonQuery()
            .Connection.Close()
        End With
        ' check return code
        If Not pr.Value Is Nothing Then
           Select Case Convert.ToInt32(pr.Value)
             Case 100 : Throw New ApplicationException("Access violation")
             Case 101 : Throw New ApplicationException("Invalid ID")
           End Select
        End If
        ' set return values
        mTitle = pTitle.Value.ToString()
        mDescription = pDescription.Value.ToString()
    Catch ex As Exception
        Throw New Exception(ex.Message, ex)
    End Try
End Sub

Another important part of the GetItem method is the use of the return value parameter. It is declared in the first few lines and then checked after the stored procedure is executed. Notice that I'm checking for the known error codes 100 and 101. We'll cover how to handle other errors in a future installment, where we build the full-blown middle layer. The point is that I am using the return value and throwing a custom exception when needed.

For this example solution, I ended up creating six Web Forms to test nearly 30 stored procedures and custom functions. You can check out all the completed forms in the download package listed at the top of this article.

Now that we've defined the tables, created the stored procedures and functions, and built ASP.NET Web Forms, we can use Visual Studio .NET 2003 to generate an install script for the database layer that can be applied to production servers by the database administrators (or in some cases, by yourself).

Generating the Source Code and Install Script

Another great feature of Visual Studio .NET is the ability to generate a complete create script for an existing database. In fact, you can use Visual Studio .NET to generate source code for the entire database layer, including table and index creation, rights grants, stored procedures, and so on, as well as a command-line script that can be used to install all these database objects on an existing SQL Server.

Generating the install scripts is very easy. There are two steps. First, you need to generate the T-SQL scripts to create the database objects (tables, indexes, procedures, and so on). The second step is to generate a command-line script that executes the T-SQL scripts against the target SQL Server.

Generating the T-SQL Scripts

Before you can generate an install script, you need to generate a collection of scripts that cover the creation of all the objects in your database including the tables, indexes, constraints, users, and so on.


Figure 11. Generating a collection of scripts

Here are the steps for creating the T-SQL scripts:

  • In the Server Explorer, right-click over the selected database node (DotNetKB) and select Generate Create Script... from the context menu to bring up the Generate Create Scripts dialog.
  • On the General tab, select the Script all objects check box.
  • On the Formatting tab, select all the check boxes, except the last one (Only script 7.0 compatible features). This last item is only needed if your target server is SQL Server 7.0 instead of SQL Server 2000.
  • On the Options tab, in the Security Scripting Options section, select all but the Script SQL Server logins option. Be sure all boxes are selected in the Table Scripting Options section. And leave the default radio buttons for File Format and Files to Generate. Finally, when all is set correctly, click the OK button to start the script generation process.
  • You'll be prompted to indicate a location for the files. By default, Visual Studio .NET will point to the Create Scripts folder within your existing Database Project. Click the OK button to accept this default location.

When the process is complete, you'll have a collection of script files covering every object in the database. Even the script for creating the proper user and granting it the correct rights is included in this script collection. You can store these files in Visual SourceSafe to manage any versioning issues over time, and you can pass them to others for updating and changing directly if you like. You now have the complete source code for your database layer.

Generating the Install Script

The final step is to get Visual Studio .NET 2003 to generate a command-line script that will read all the T-SQL scripts and run them against a target SQL Server. To do this, follow the steps below.

  • In the Solution Explorer, right-click over the project name (DotNetKB) and select Create Command File... from the context menu to bring up the Create Command File dialog.

  • If you wish, you can update the Name of Command File input box and select the appropriate authentication scheme (use Microsoft Windows® NT® authentication unless you need to connect to the server remotely). Finally, click the Add All button to include all the T-SQL scripts in the install operation.

  • Then click the OK button to generate the script. This will load the completed script in the editor window (see Figure 12) where you can review it and close the window to save any changes.


    Figure 12. Generating the install script

With this script in hand, along with the collection of T-SQL files, you can now install your new database layer on any target SQL Server 2000 to which you have the proper rights.

Summary

We covered a lot of material in this installment. You now know how to use Visual Studio .NET 2003 to create a Database Project, and how to create a new database along with a database diagram that defines the tables, indexes, constraints, and relations. You learned how to use Visual Studio .NET 2003 to add stored procedures and custom functions that access the data stored in the tables, and how to set up a reliable trust relationship between SQL Server and your ASP.NET solution using a custom Windows account and settings in both IIS and the web.config file for your Web application. Finally, you saw example test Web Forms that validate your data-layer programming, and learned how to generate the T-SQL and command-line scripts you can use to install the completed data layer on any target SQL Server.

You may have noticed that the process of storing and recalling Expert records was left out of the database layer here. That is because I decided to use an XML file instead. This gives us a chance to learn how to use XML as a data source and how to incorporate this data format with the SQL Server data to create a complete solution. In the next installment, we'll define the XML storage format as well as the component layer to read and write this data. Along the way, we'll learn about XML serialization and about the data-caching services built into ASP.NET.

Note   You can keep up-to-date on this project by visiting the related DotNetKB Web site. I post updates and other material, and respond to comments, at that site in between the publication of these installments on MSDN®.

Mike Amundsen provides training, speaking, and consulting services. To learn more about him or to contact him, visit his site at http://amundsen.com. You can also find updates to this article and related material there.

© Microsoft Corporation. All rights reserved.