Taking Advantage of PEAR

All programmers know that code reuse is the key to an efficient and timely, not to mention sane, application development process. Yet far too few practice what is so often preached, much to the chagrin of colleagues, clients, and project managers everywhere. Such irrationality is all the more glaring given that many languages enjoy hyperactive user communities, which regularly churn out large amounts of code and make it available for others to freely adapt to their own needs.

PHP users have a particularly useful trove of code at their disposal, made available through the PHP Extension and Application Repository, better known as PEAR. Containing over 400 packages categorized under 40 different topics, and growing all the time, this community-driven service will save you countless hours of programming time. PEAR packages are available for accomplishing everything from creating barcode images and compressing files to abstracting database access and documenting code, and much more. In this article, I'll show you how to begin taking advantage of PEAR by introducing the PEAR Package Manager, and then demonstrating two useful packages.

Using PEAR

PEAR has become such an important aspect of efficient PHP programming that it’s been included with the distribution since version 4.3.0. I’ll assume you’re using this version or greater; if not, you’ll need to install PEAR or upgrade to a later PHP version. See the PEAR manual for more information about the installation procedure.

You interact with PEAR using the PEAR Package Manager. It allows you to browse and search the contributions, view recent releases, and download packages. It executes via the command-line, using the following syntax:
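The general form, per the PEAR manual, is:

%> pear [options] command [command-options] <parameters>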
Executing pear without any arguments will produce a list of supported commands and usage information. Most users will only regularly use a subset of these commands, most notably:

info: Display information about an installed package
install: Install a package
list: List installed packages
list-all: List all packages found in the repository
list-upgrades: List all available upgrades
search: Search the repository
uninstall: Uninstall a package
To demonstrate the power of PEAR, I'd like to call attention to a package that I think exemplifies why you should regularly look to the repository before attempting to resolve any significant programming task. While some might consider this particular choice a tad odd, it both solves a particularly tricky problem and is representative of the kind of problem many would assume is too uncommon to bother searching for an available solution. The package is Numbers_Roman, and it makes converting Arabic numerals to Roman and vice versa a snap.

Suppose you were recently hired to create a new website for a movie producer. As we all know, any serious producer uses Roman numerals to represent years, and the product manager tells you that any date found on the website must appear in this format. Take a moment to think about this, as it isn't as easy as it may sound. Of course, you could look up a conversion table online and hard-code the values, but how would you ensure the site copyright year in the page footer was always up to date? You're just about to settle in for a long evening of coding when you pause to consider whether somebody else has encountered a similar problem. Surely a quick search of PEAR would be worth the trouble? You navigate over and, sure enough, encounter Numbers_Roman.

For the purposes of this exercise, I'll assume the package has been installed on the server. Don't worry too much about this right now; you'll learn how to install packages in the next section. So how would you go about making sure the current year is displayed in the footer?
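A minimal sketch follows, assuming the package's documented toNumeral() method (a companion toNumber() method handles the reverse conversion):

<?php
// Load the PEAR Numbers_Roman package.
require_once 'Numbers/Roman.php';

// Convert the current four-digit year to its Roman-numeral equivalent.
$year = date('Y');
$roman = Numbers_Roman::toNumeral($year);

// Output the footer copyright line.
echo 'Copyright &copy; ' . $roman;
?>

For the year 2005, this script would produce: Copyright © MMV. The moral of this story? Even though you may think a problem is obscure, it's almost a given that other programmers have faced something similar, and if you're fortunate, a solution is readily available and yours for the taking.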

Creating HTML Forms

Creating HTML forms in an orderly manner can be tedious, particularly when the need arises to populate the form with dynamic data, for instance data retrieved from a database. Most developers choose to escape in and out of PHP code, outputting the dynamic data at the appropriate locations. While this works, the code can become quite messy. What if there were a more effective way? A quick review of PEAR turns up HTML_QuickForm, which makes form creation, not to mention data validation, trivial. Go ahead and install HTML_QuickForm, passing the -a option to the install command to make sure all necessary dependencies are also installed.
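On the command line, that looks like this:

%> pear install -a HTML_QuickForm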

For the first example, let’s just create a simple form consisting of input fields for the user’s name and email address:
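A minimal sketch, using HTML_QuickForm's documented addElement() and display() methods (the form and field names here are my own):

<?php
// Load the PEAR HTML_QuickForm package.
require_once 'HTML/QuickForm.php';

// Create a form named 'contact' that submits via POST.
$form = new HTML_QuickForm('contact', 'post');

// Add the input fields and a submit button.
$form->addElement('text', 'name', 'Name:');
$form->addElement('text', 'email', 'E-mail address:');
$form->addElement('submit', null, 'Submit');

// Render the form.
$form->display();
?>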

What if you wanted to auto-populate the form with some data retrieved from a database? Just create an associative array using keys that match the form element names, and then call the setDefaults() method prior to rendering the form. The modified script follows, with the new lines marked by comments:
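The default values below are hypothetical stand-ins for data you would retrieve from a database:

<?php
require_once 'HTML/QuickForm.php';

$form = new HTML_QuickForm('contact', 'post');
$form->addElement('text', 'name', 'Name:');
$form->addElement('text', 'email', 'E-mail address:');
$form->addElement('submit', null, 'Submit');

// New: default values keyed by element name, as if fetched from a database.
$defaults = array(
    'name'  => 'W. Jason Gilmore',
    'email' => 'wjgilmore@example.com'
);
$form->setDefaults($defaults);

$form->display();
?>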
This is just a glancing introduction to HTML_QuickForm, and is intended to give you an idea of the powerful packages available to you by way of PEAR. Take some time to peruse the PEAR package directory; I'm positive you'll be impressed at what you find!

About the Author

W. Jason Gilmore is an open source editor for Apress. He's the author of the best-selling Beginning PHP 5 and MySQL: Novice to Professional. His writings on open source technologies have been featured within many of the computing industry's leading publications, including Linux Magazine, O'Reillynet, Devshed, and Zend.com. Jason loves receiving e-mail, so don't hesitate to write him.

Unicode Support

If one of your requirements is to support multiple languages, you must familiarize yourself with supporting Unicode data. An in-depth discussion of Unicode support is outside the scope of this article. Besides, Joel Spolsky has already said it best. What you should know is how Unicode affects your database design (data types used), how it affects the encoding used for your Web application pages, and how you’ll need to handle localization data differently for client software versus a Web application. It’s definitely something to research if localization and internationalization (i18n) aren’t familiar topics to you.
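To give one concrete illustration of the data-type point, here is a sketch in SQL Server syntax; the table and column names are hypothetical:

CREATE TABLE Messages (
    MessageID INT IDENTITY(1,1) PRIMARY KEY,
    Body NVARCHAR(1000)  -- NVARCHAR/NCHAR store Unicode; VARCHAR/CHAR do not
)

INSERT INTO Messages (Body) VALUES (N'Здравствуйте')  -- the N prefix marks a Unicode literal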

API Requirements

Another feature of enterprise software that can make it appealing to an organization—especially one that employs developers—is a publicly exposed, documented API. What this means to your customers is that they can build their own customized applications and have them integrate with your system. Here are some general rules to follow:

Document your API: This will benefit both you and your customers. Documenting your API helps reinforce your understanding of the system and forces you to evaluate its design.
Provide an SDK: An SDK with sample application code can go a long way towards making a large client feel like they're buying an open, extensible system that they can write custom modules for. Providing an SDK with sample applications also allows you to "dog-food" your API and evaluate its quality.

Software Configuration/Settings

Most systems require network and local machine-specific settings to run properly. E-mail addresses, IP addresses, directories, themes, intervals, timeouts, licensing...the list goes on. Where and how you store these settings is not as important as how you make them accessible to an admin user. I like to allow these settings to be configured during setup and then adjusted later with an administrative "control panel" feature. With client software, I try to keep standard users from being able to access the control panel application.

DOs:

Provide a control panel where these settings can be adjusted.
Provide security over who can access this control panel.
Encrypt settings in the database that contain sensitive information.
Wrap setting update actions within transactions.
Provide a chance to configure settings during setup.
Attempt to validate settings (ping IP addresses, check for specified directories, query the domain for e-mail aliases, and the like).
Provide helpful tool-tips and descriptive names for system settings; not everyone will have the same interpretation of certain words, so avoid acronyms.

DON'Ts:

Require setup to define success or failure based on whether settings are provided or not; they might not be known at setup time.
Make users update settings directly in the database with a tool like TOAD, SQL*Plus, Query Analyzer, or Enterprise Manager.
Allow settings to become corrupted or lost.
Refuse settings values outright because of validation failure; remember, your validation might not work in every environment.
Deployment Requirements

You want prospective clients to be able to try out your system without too much help from you. The perception that your software is easy to use starts with the setup process (some might say it starts when they visit your Web site to download it). Any computer-savvy customer should be able to get themselves up and running with your setup without getting you involved. Take a look at what that involves:

Avoid Database Scripts: You should keep DDL and DML scripts around for your own development and testing purposes, but never give them to your clients for them to create a database for your software. Doing so is fraught with possible headaches. Some clients might have zealous DBAs who think certain data types should never be used and change them in your scripts before executing them. It sounds funny, but I've actually seen it happen. So, if you don't give them scripts, then what?
Create a Database Setup: Allow the DBA or whoever has administrative database privileges to run a setup program to create the database. You can point the setup program to a separate file that contains the database scripts if you want; but if you do, make sure the scripts are encrypted and decrypt them inside the setup program. This will help you avoid problems like the one mentioned above and the user will be none the wiser. Your database setup program should create the database and all the database objects. If you want to get fancy, and you don’t have a separate database requirement, you can allow the user to specify which existing database they would like your application’s database objects created in. This is why it’s good practice to give database objects a small system-specific prefix for enterprise software so that the names are less likely to clash with existing objects.
Always Make Rebooting an Option: If you have a desktop client setup that requires a reboot of the machine after completion, make sure it allows the user to end the setup and reboot their machine manually. If your setup causes someone to lose their work (even if accidentally because they weren’t paying attention), your program is already an enemy before they’ve used it. Personally, I prefer setup programs that inform me I need to reboot but don’t do so on my behalf.
Web Application Setup: If you have a Web application setup, make sure it creates any required settings. For example, if your ASP.NET application’s virtual directory needs to be configured to run as an IIS application, make sure your setup creates it as such. If your Web application requires a database connection string or other sensitive information stored in a configuration file, make sure the setup prompts the user for the required information and encrypts it in the configuration file for them. You don’t want users at any level exposed to the inner workings of your system configuration.
Licensing: If you want people to be able to try your software before buying it, make sure that updating the license information is a simple process. Don’t make your setup program be the only place that licensing information can be input.
Web UI vs. Thick Client: This is an important decision and will have deep ramifications for deployment options. Be sure that you’re making your choices for the right reasons. Technologies such as DHTML are making Web applications more appealing because of the robust interface capabilities coupled with extremely simplistic deployment. However, Click-Once Deployment for .NET applications is attempting to bring that same deployment simplicity to Windows desktop software. Click-Once is great, but unfortunately it has a long way to go to convince both developers and managers who have suffered through DLL-hell for almost a decade now. Having said that, if your application requires a rich graphical interactive UI, Click-Once might help resolve those DLL-hell induced headaches for you.
Architect Knowledge/Experience

Among those working in software, some take the title of ‘Architect’ more seriously than others. This can make it difficult to judge whether someone claiming to be an architect is the real deal. To be sure the person in charge of system architecture has the skills needed to hit the ground running, here are some things to focus on.

Has architected software systems before.
Has first hand knowledge of platform weaknesses/idiosyncrasies and how to overcome them.
Can effectively communicate the architecture of the system to the developers who will be implementing it.
Can handle the business side of software as well as the technical side (for example, working with project managers and other stakeholders).
Knows first-hand the strengths and weaknesses of many different commercial application platforms and their components.
Can properly identify cases when transactions, queuing, caching, messaging, and other system behaviors are necessary.
Has dealt with standards compliance before (ISO, ANSI, IEEE, and so forth).

Enterprise Development Planning

The natural progression in the career of most developers goes from uncomplicated procedural programming and desktop applications that use only the resources of the local computer ("monolithic applications") to object-oriented programming and wide-ranging enterprise software systems spread across possibly thousands of computers encompassing multiple physical locations. Accordingly, enterprise development introduces developers and architects to obstacles they won't find in desktop development. Much like human beings, software is said to have "matured" as it does more and becomes more reliable and more robust.

So, take a look at what separates the enterprise from the desktop. Having recently spent several months prototyping, developing, testing, and deploying an enterprise software system aimed at Fortune 500 businesses, I had to address the following concerns up front:

Scalability Requirements (handling hundreds of concurrent users)
DBMS Neutrality (must work with any OLEDB data source)
Distributed Architecture Requirements
Security Requirements (including accessibility/visibility of data by users)
Unicode Support Requirements
API Requirements (for third-party application integration)
Software Configuration/Settings
Deployment Requirements (Setup, Web UI vs. Thick "Fat" Client)
Architect Experience/Knowledge (Distributed Software Architectures)
Developer Experience/Knowledge (OOP, COM, .NET, SQL, and so forth)
Development Methodologies/Practices/Tools
This is not a comprehensive checklist for enterprise development by any means, but it serves the purpose of showing that enterprise development is difficult and requires some forethought. This list does not include schedule, budget, or resource constraints—which usually add more complexity to the project.

As you may have noticed, I listed architect experience separately from developer experience. I did this because I have seen projects with brilliant developers fail because the person who architected the system did a poor job. It is much easier to overcome a lack of development experience than a lack of architecture experience. Good architects are rare, and much of what they bring consists of soft skills that, in many cases, can only be learned through experience, not taught. Development experience is not so fickle and most times can be remedied by books, training, or mentoring. Although a discussion of hiring and interviewing practices is outside the scope of this article, it suffices to mention that getting the right people for the job can make or break a project.

Now, let me enumerate the list in more detail, starting from the top. My goal is to cover a wide range of topics, thus I will abstain from going into any real depth on the particulars. I would recommend consulting Google or Amazon.com for resources specific to any topic discussed hereafter.

Scalability Requirements

If you're building a software system that could, for any conceivable reason, end up needing to support many concurrent users, you're going to want your system to be as scalable as possible. Don't make the mistake of assuming the number of users will never grow. Even if your system starts off supporting a small user base, you probably will not want to re-design the system a year later when that same group has 500 people, all depending on your system to do their jobs. Here are some development tactics that will help you achieve scalability for applications of any size:

High Availability State Servers: If you are building a Web application, make sure you are using a state server that scales. In other words, you must be able to manage state across a Web farm (sometimes referred to as a "web cluster"). Instead of using the default in-process state handlers provided by ASP.NET, PHP, ColdFusion and other Web development application platforms, look into a state server that uses a database, an out-of-process state server, or a third-party state server. This will give you the ability to restart your Web server without losing session state data. In some environments, this is not just important, but required by the nature of the data being handled and the potential length of user sessions. When session state is being managed on a separate machine, keep in mind issues such as network latency and security. An ideal setup is for the state server machine to be on a private network accessible by the Web server. You'll also want to make use of connection pooling if you use a database as your state server. A configuration sketch follows this list.
Database Connections: The rule with database connections is, “Acquire Late, Release Early.” That is, don’t open your database connections until absolutely required, and close them as soon as possible. You might also hear this rule said as “Get in, then get out!” in regards to retrieving data. Avoid server-side cursors and operations that require keeping an open database connection. There’s also a chance you’ll make friends with a DBA or two when developing with this mindset.
Distributed Architecture: Typically, the more you can spread out the computing load across machines, the more you can scale. Ultimately, each application and network topology contributes to a unique environment that determines requirements. Having said that, you will generally be best equipped to scale if the components from each logical tier of your application can be configured to run either on the same machine or on a different machine for each tier. I will expand on this further in a section devoted to this topic.
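To make the state-server point concrete, here is a minimal sketch of how an ASP.NET application might move session state out of process; the server name and timeout are hypothetical:

<configuration>
  <system.web>
    <!-- Store session state in SQL Server rather than in-process, so the
         Web server (or any member of the Web farm) can restart without
         losing session data. -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=StateDbServer;Integrated Security=SSPI;"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>

Switching mode to "StateServer" (with a stateConnectionString such as "tcpip=StateMachine:42424") uses the out-of-process state service instead; either way, session state survives a Web server restart.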
DBMS Neutrality

Every enterprise-class system I’ve built or used stored its data in a relational database. Depending on the market or industry you’re targeting with the software you’re building, you might be fortunate enough to support only a single DBMS. If you’re not so lucky—and few of us are—you’re going to need a strategy for supporting multiple database systems with a single code base. What helps you do this?

Avoid ODBC: ODBC Drivers have limitations dealing with large binary data types. Use OLEDB drivers instead. If you’re going to use ODBC anyway, consider providing a tool that will allow you to switch drivers. You never know what data types you could need in the future.
Decouple your SQL: Most developers have heard the rule about not hard-coding SQL into your code. So what should you do instead? One strategy is to store all of the DDL and DML in a separate file accessible by your application. Ideally, this will be an XML (.config) or INI file. This file can contain all your SQL statements for any DBMS you support. You can also use a set of characters as a replacement token in your SQL file so that you can have parameterized queries. For example, ("SELECT Field1, Field2 FROM MyTable WHERE Field3 = '~XYZ~' "), where ~XYZ~ would be the token replaced by a value you supply at run-time. I've done this before; the result was an XML file about 200K in size that contained over 500 queries, and I spent about two hours developing a utility that helped me easily manage the XML query file during development. This approach also allows you to fix SQL bugs without recompiling any of your code. Don't forget to wrap your important database actions within transactions.
Avoid Stored Procedures: This is controversial advice, no doubt, but I’m definitely not the only one advising this. When the majority of your SQL is ad-hoc, using stored procedures will not provide a significant performance or security benefit. Additionally, if you have to support more than one DBMS, (for example, SQL Server and Oracle) you’ll have to translate all of your stored procedure T-SQL to PL/SQL. Sometimes it makes sense to include specific T-SQL or PL/SQL functions in your SQL statements—like CONVERT() or TO_DATE() to handle dates—but use them sparingly. Also note I have used the word ‘avoid’ with regard to stored procedures. Some situations might require their use and you just have to bite the bullet.
Research DBMS Differences: Familiarize yourself (or your team) with the differences that exist among the popular database systems. For example, SQL Server’s auto-incrementing (Identity) fields don’t exist in Oracle. Instead, you’ll have to use Oracle sequences along with insert triggers to automatically get a primary key value inserted. Knowing these idiosyncrasies will allow you to make design accommodations for them early.
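For example, here is a sketch of the two approaches to auto-generated keys; the table and object names are hypothetical:

-- SQL Server: an IDENTITY column generates key values automatically.
CREATE TABLE Orders (
    OrderID   INT IDENTITY(1,1) PRIMARY KEY,
    OrderDate DATETIME
)

-- Oracle: emulate the same behavior with a sequence plus an insert trigger.
CREATE SEQUENCE orders_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER orders_bi
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
    SELECT orders_seq.NEXTVAL INTO :NEW.order_id FROM dual;
END;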
Distributed Architecture Requirements

Until the last five years or so, developing, testing, and deploying distributed applications was one of the most complicated things you could attempt to do (some might argue that it still is). You have to worry about the reliability of the network and the performance of communicating with remote servers compared to local machine processes. During the initial stages of system design, you have some choices to make with regard to your architecture.

Require client-side database drivers? With a Web UI, the answer is an obvious no (with the possible exception of embedded ActiveX controls). For a thick client, the answer is not so clear cut. If you don't want to require client-side database drivers, you'll need a way to instantiate objects remotely from a machine that has the database drivers, then pass them to the client. This can be done with technologies such as .NET Remoting, DCOM(COM+), Java RMI, or CORBA. The distributed technology you choose will most likely depend on your development platform; a configuration sketch follows this list.
Which Protocol/Port? Most distributed technologies allow you to use different ports (or channels) to pass serialized objects back and forth. Should the application be able to work over the Internet or only within an intranet? Does your application need to bypass firewalls?
Deployment Risks: Most distributed objects must be registered somehow on client machines. This has the potential to make deployment more complex. You’ll need a strategy to keep the deployment risks minimal.
Interoperability Requirements: Discussing interoperability could take up an entire article (or book) by itself, so I’ll summarize by mentioning that you want to use the simplest data types possible at your endpoints. Avoid returning types such as DataSet from your endpoints if interoperability is a goal of your system.
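As an illustration of the client-side drivers point, a minimal .NET Remoting server configuration might look something like this; the type name, assembly, URI, and port are hypothetical:

<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <!-- Expose a server-side broker object that owns the database
             drivers, so clients never need them installed locally. -->
        <wellknown mode="SingleCall"
                   type="MyApp.Data.DataBroker, MyApp.Data"
                   objectUri="DataBroker.rem" />
      </service>
      <channels>
        <!-- A TCP channel suits an intranet; an HTTP channel is friendlier
             to firewalls if the application must work over the Internet. -->
        <channel ref="tcp" port="8085" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>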
Security Requirements

Commonly, software security means authentication and authorization. Make the user prove who they are (authenticate) and then check their privileges (authorize). These two ideas, along with user roles and groups, have been around for a long time and are pretty much assumed to be in place for any enterprise software package. What is emerging more recently, however, is smart, flexible data security. Organizations want to be able to define (authorize) who gets to see the many pieces of data they collect. To help them achieve this with your software, here are some ideas:

Support Active Directory: Users already have enough user name/password combinations to remember, and if your system insists on managing its own user names and passwords, you're potentially requiring someone to spend several hours inputting names and, essentially, duplicating existing data. Try to avoid this.
Don’t Require Active Directory: Don’t forget that some networks aren’t running Active Directory. Have a configurable backup plan for managing user information.
Simple and Extendable: Keep the security model simple and extendable. One of the most easily maintainable security models I’ve implemented consisted of defining all actions that could be taken against entities in the system (View, Edit, Create, Delete, and so on) and then allowing the admin user to define whether those actions could be taken (or not) for each entity. A “Role” then consisted of the combined settings for allowable actions upon entities. Users then could be added to roles. If for some reason we wanted to add another entity to the system, it was simply a matter of adding a row for the entity to the correct database table. All actions for the entity were already defined.
The other point to be made here is the word data in flexible data security. Allowing admin users to define wildcard strings to be used in data retrieval (SQL statements) provides the utmost power; for example, an admin could express the requirement that users in the operations role should see only rows where a particular field starts with the letters 'OPS'.

Maintain Consistent Standards: If administrators of your system are network administrators also, they’re going to be used to things like denied permissions overriding approved permissions. Make sure your security works the same way to avoid confusion.
Use Application Roles: Using application roles for your database connections helps further security by allowing you to define permissions for everyone with a single account. It’s also possible to define database-object level permission with an application role to avoid costly accidents involving deletion of data. Create a standard user application role and deny delete permissions on any tables that standard users should never need to delete information from.
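A sketch of that last suggestion in SQL Server 2000 syntax; the role name, password, and table are hypothetical:

-- Create an application role and deny deletes on a table that standard
-- users should never delete from.
EXEC sp_addapprole 'StandardUserApp', 'str0ngPassw0rd'
DENY DELETE ON Orders TO StandardUserApp
GO

-- At connection time, the application activates the role; permission
-- checks from then on apply to the role, not the individual login.
EXEC sp_setapprole 'StandardUserApp', 'str0ngPassw0rd'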

SQL Profiler Tips and Tricks

By Mike Gunderloy

If you're a developer who works with SQL Server, you probably already know about SQL Profiler, the graphical tool that lets you capture and analyze SQL Server events in real time. If you're not familiar with this tool, check out my previous article, "Introduction to SQL Profiler". This time around, I'm going to drill a bit deeper into this useful tool, offering ten more bits of SQL Profiler functionality that you might not have noticed yet.

1. Using Existing Templates

When you fire up SQL Profiler and tell it to create a new trace, it pre-selects some events, data columns, and filters for you. Before you start fine-tuning this selection, you should know that the SQL Server team has already saved some useful starting points for you in the form of trace templates. Instead of creating a new trace, select File, Open, Trace Template. SQL Server ships with trace templates for various purposes including simply counting stored procedures, tuning your SQL statements, and measuring the duration of T-SQL statements.

2. Creating Your Own Templates

Of course, the built-in trace templates won’t be perfect for everything you want to do. Sooner or later you’ll find yourself carefully crafting a SQL Profiler trace that has the exact combination of events, filters, and data columns that you need to diagnose some common problem within your own organization. When you do, you can stash this combination away for the future as a trace template of your own. Just stop the trace and select File, Save As, Trace Template and assign it a memorable name. The next time you need the same combination you can open up the saved template and have it instantly available.

3. Saving to a Table

One of the nice things about the SQL Profiler engine is that you can either capture data for interactive analysis right on screen, or save it for later inspection – and you don't have to decide up front which of those things you're going to do. Better yet, you can save a trace in the most natural possible place: right in a SQL Server table! When you've created a trace that warrants keeping around for future analysis, stop the trace and select File, Save As, Trace Table. You'll be presented with the Connect to SQL Server dialog box so that you can choose the server where you want to save the trace (this doesn't have to be the same server that you're profiling). Then select a database and either select an existing table or type the name of a new table, as shown in Figure 1. Click OK to save the trace.

Saving a trace to a SQL Server table
4. Replaying a Trace

Saving your traces to a table enables one of the other exciting features of SQL Profiler: replaying traces. Select File, Open, Trace Table and choose a server to connect to that has a saved trace table. Open the trace table. Now you’ll find that the commands on the Replay menu are active. Choose Replay, Start and SQL Profiler will let you choose a server to be the target of the replayed activity. Figure 2 shows the options that you can set for a replay.

Replaying a saved trace
Why would you want to replay a trace? Suppose you’re debugging a problem with one of your servers – say, clients are deadlocking when running a particular set of queries. You can run SQL Profiler to capture a trace of the client activity that includes the deadlocks, and then rework the stored procedures on the server that you think are causing the deadlock. Replay the stored trace, and you can see whether your fixes were effective in preventing the problem from happening again.

To replay a trace, SQL Server must have certain event classes and data columns in the trace. The easiest way to make sure you have the minimum set in your trace is to start with the SQLProfilerTSQL_Replay trace template, which is one of the ones that’s installed with SQL Server.

5. Using Breakpoints

When you're replaying traces, you can use some standard debugging tools to view selected events in slow motion. Place the cursor on any line in the trace and then press F9 to set a breakpoint (or press F9 a second time to clear an existing breakpoint). Then you can press F5 to run the trace to the next breakpoint. At that point you can use the F10 key to execute the trace one statement at a time. Alternatively, you can use Ctrl+F10 to execute all statements up to the current cursor location.

6. Locating Deadlock Causes

If you're having intermittent deadlock problems, it can be tough to figure out where they're coming from. This is especially true if your server is busy: how do you even spot the deadlocks going by? SQL Profiler can help you here. Set up a trace and, in the Events selection, expand the Locks group, then select the Lock:Deadlock and Lock:Deadlock Chain events. If you monitor for these two events, SQL Profiler will produce a trace that contains details on just the deadlocks on your server. Record whatever identifying information you want – for example, the application name, logon name, and so on – and you'll be well on your way to figuring out where the culprits are.

7. Auditing Logins

How about tracking the user activity on your SQL Server? You can use SQL Profiler for this too. Again, the key lies in properly choosing the events that you profile. Set up a trace that monitors the Security Audit:Audit Login and Security Audit:Audit Logout events, and send it to a trace table. Then you’ll have a persistent record right in your database of who was using the database and when they were using it.
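Once those events are flowing into a trace table, an ordinary query can summarize usage. A sketch, assuming a trace table named AuditTrace with the standard trace columns (event class 14 is Audit Login):

SELECT LoginName, COUNT(*) AS Logins
FROM AuditTrace
WHERE EventClass = 14  -- Audit Login events only
GROUP BY LoginName
ORDER BY Logins DESC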

8. Watching for Table Scans

Another good use of SQL Profiler is to find queries that are causing table scans on your server. You want SQL Server to be using indexes to find the data that your users require – not to be looking through every row! To check, turn on a trace with the MISC:Execution Plan event. This will capture the query execution plan for every query. Then look over these plans for any that include a “Table Scan” or “Clustered Index Scan” (which indicates that the server is scanning all rows in the index, not that it’s using the index to find a particular row). You can then examine those particular queries in more detail to see whether adding additional indexes to your database’s tables could make them more efficient in the future.
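When a plan does reveal a table scan, the fix is often as simple as indexing the column being searched; a sketch with hypothetical table and column names:

-- A query filtering on LastName was producing a table scan; this index
-- lets the server seek directly to the matching rows instead.
CREATE INDEX IX_Customers_LastName ON Customers (LastName)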

9. Using Keyboard Shortcuts

Like any other Windows application, SQL Profiler supports keyboard shortcuts for more efficient use. If you use this tool frequently, you’ll probably want to memorize some of the most useful ones:

Ctrl+Shift+Delete: Clear the current trace window
Ctrl+F: Open the Find dialog box
F3: Find the next match
Shift+F3: Find the previous match
Ctrl+N: Open a new trace
F5: Start a replay
Shift+F5: Stop a replay
F9: Toggle a breakpoint
F10: Single-step
Ctrl+F10: Run to the cursor
10. Choosing Data Columns

Finally, remember that you don't always have to accept the default data columns for your traces. You might find that SQL Server suggests too much or too little data for your tastes. In many cases, the NT User Name and application name will be irrelevant for troubleshooting SQL issues, and you can remove them from the data to avoid cluttering up the display. On the other hand, if you're tracing events for multiple databases, you'll probably want to throw in the database name. There's a lot of other information available, from the duration of the event to the name of the object affected by the current statement, so take a look at the list before you just blindly click OK.

And Yes, There's More

As you should be able to tell by now, SQL Profiler is a vital tool for diagnosing SQL Server issues of all types. When you're trying to figure out what the heck is going on, especially with a heavily-loaded server, a well-chosen trace can help you pick out just the key events that you need to diagnose a problem. Spend some time getting acquainted with the events that you can trace and the data that you can capture, and you'll find many uses for this tool in the future.

Mike Gunderloy is the author of over 20 books and numerous articles on development topics, and the lead developer for Larkware. Check out his recent book, Coder to Developer from Sybex. When he’s not writing code, Mike putters in the garden on his farm in eastern Washington state.