

System.Data Run-Time Breaking Changes

 
Short Description RejectChanges() now has cascade semantics like AcceptChanges(); it is independent of the current DataRow state.
Affected APIs DataTable.RejectChanges Severity Low Compat Switch Available No

Description Unlike DataRow.AcceptChanges, DataRow.RejectChanges in V1.x does not cascade if the parent row's state is Modified and it has child rows in the 'Added' state. This means that a call to RejectChanges will *not* cascade down from a modified parent; cascading is conditional. It should be unconditional and have the same cascading semantics as DataRow.AcceptChanges().

User Scenario A user invoking RejectChanges() on a DataRow will see it applied (as expected) to its descendants. Applications that relied on it *not* affecting descendants may see a change in behavior. (*Very unlikely*)

Work Around The user can control the cascading behavior using the AcceptRejectRule; if cascading RejectChanges is not desired, set the rule appropriately.
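
For instance, a minimal sketch (the table and column names are hypothetical) of setting the rule on a ForeignKeyConstraint so that Accept/RejectChanges does not cascade:

    using System.Data;

    class CascadeExample
    {
        // Sketch: keep AcceptChanges/RejectChanges from cascading across a
        // parent-child relation by setting AcceptRejectRule.None.
        static void DisableCascade(DataSet ds)
        {
            // "Orders", "OrderDetails" and "OrderID" are hypothetical names.
            ForeignKeyConstraint fk = new ForeignKeyConstraint(
                ds.Tables["Orders"].Columns["OrderID"],
                ds.Tables["OrderDetails"].Columns["OrderID"]);
            fk.AcceptRejectRule = AcceptRejectRule.None; // no cascade on Accept/RejectChanges
            ds.Tables["OrderDetails"].Constraints.Add(fk);
        }
    }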
 

 

 
Short Description V2.0 has a new set of Performance Counters that are specific to each provider; all counters in the .NET CLR Data category have been obsoleted.
Affected APIs None Severity Low Compat Switch Available No

Description The new performance counters in V2.0 will show more accurate numbers. In V1.1 these counters were never decremented; in order to "reset" them, the user had to end the process that was doing the work, wait 5-10 minutes, and then start with a clean set of performance counters. Otherwise the counters would just keep increasing for items such as open connections. These performance counters have been obsoleted in V2.0 - they have not been removed, but they will no longer be populated.

User Scenario Users that programmatically retrieve performance counter information, and users of enterprise management software that have rules to monitor .NET CLR Data counters, would be broken by this change. Users that perform manual monitoring can simply choose a different counter from the performance monitor UI.

Work Around Change the performance counter names to the new provider-specific counters.
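
As a sketch, reading one of the new provider-specific counters programmatically; the category and counter names below are assumptions about what V2.0 SqlClient registers, so verify the exact names in the Performance Monitor UI:

    using System;
    using System.Diagnostics;

    class CounterExample
    {
        static void Main()
        {
            // Assumed category/counter names for the V2.0 SqlClient provider.
            PerformanceCounter counter = new PerformanceCounter(
                ".NET Data Provider for SqlServer",
                "NumberOfPooledConnections",
                "myApp[1234]"); // hypothetical per-process instance name
            Console.WriteLine(counter.NextValue());
        }
    }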
 

 

 
Short Description In 1.0 and 1.1, if a user did not specify a value or set the Size to zero on the Parameter object, the Size was reported as zero for OleDb, Odbc, and OracleClient. Internally these three providers inferred the size when binding the parameter to the underlying stream sent to the server, and reported the inferred size to the server. SqlClient inferred the size from the value, if a value existed, and reported this inferred size.
Affected APIs OleDbParameter.Precision, OleDbParameter.Scale, OdbcParameter.Precision, OdbcParameter.Scale, OracleParameter.Precision, OracleParameter.Scale, SqlParameter.Precision, SqlParameter.Scale, IDbDataParameter.Precision, IDbDataParameter.Scale Severity Low Compat Switch Available No

Description We currently have separate Precision and Scale properties on our IDbDataParameter and our new DbParameter class. We do not need separate Precision and Scale properties in our DbParameter classes, since all decimal numbers have built-in scale and precision values. Having two different ways to set these values has been causing never-ending grief for customers. Many issues arise when these two values do not match the decimal value. Some of our providers accept the value's numbers, while others try to truncate to fit the numbers specified by the properties. In the ODBC managed provider, for instance, we truncate when inserting these decimals through regular insert statements, but we pass the value along when using RPC calls. Some customers expect us to do truncation, while others complain that we are losing their data.

User Scenario A user sets the value of a Parameter to a decimal that has a precision of 10 and a scale of 15. When they set the Precision and Scale properties to their appropriate values, they will receive a warning that these properties are to be obsoleted and that the best practice is to let the provider and server infer the values.

Work Around For the short term, the properties will continue to exist on the interface and people can continue to use them. The suggested guidance is twofold: 1) users can set the precision and scale of the values by explicitly using the math classes to truncate or extend the digits after the decimal point, and 2) users can handle specific precision and scale issues on the server side in stored procedures if they require some specific server-side behavior.
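
A minimal sketch of guidance (1), normalizing the decimal in code before assigning it to the parameter (rounding and truncation shown side by side):

    using System;

    class DecimalNormalization
    {
        static void Main()
        {
            decimal value = 123.456789m;

            // Round to 2 digits after the decimal point (banker's rounding).
            decimal rounded = decimal.Round(value, 2);                  // 123.46

            // True truncation: scale up, truncate, scale back.
            decimal truncated = decimal.Truncate(value * 100m) / 100m;  // 123.45

            Console.WriteLine("{0} {1}", rounded, truncated);
        }
    }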
 

 

 
Short Description In V1.1, DataTable.Rows.Remove(row) does not actually remove rows that are in the 'deleted' state. Removal is conditional, based on the state of the row being removed.
Affected APIs DataTable.Rows.Remove(DataRow row) Severity Low Compat Switch Available No

Description In V1.1, if Remove is called on rows that are not in the deleted state, they get removed; however, calling Remove on rows that are in the deleted state does not remove them. If a user calls Remove(row), the row should get removed regardless of the state it is in.

User Scenario A developer has marked a row as 'deleted' and then, later on, tries to remove the row. The row will not get removed.

Work Around Do not call Remove() on deleted rows.
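
A sketch of that guidance: branch on RowState instead of calling Remove() unconditionally. (Calling AcceptChanges on a Deleted row commits the pending delete, which removes the row.)

    using System.Data;

    class RemoveVsDelete
    {
        static void RemoveRow(DataTable table, DataRow row)
        {
            if (row.RowState == DataRowState.Deleted)
                row.AcceptChanges();      // commits the pending delete, removing the row
            else
                table.Rows.Remove(row);   // physically removes a non-deleted row
        }
    }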
 

 

 
Short Description Introducing type checking for UDT/Object columns may break applications relying on *no* type checking.
Affected APIs All DataColumn access APIs: 1. DataTable.Rows[rowIndex][columnIndex] and 2. DataTable.Rows.Add(Object[] values) Severity Low Compat Switch Available No

Description There is extremely limited support for UDTs in DataSet in the 1.1 release of the .NET Framework. UDT/Object columns can be created, but there is no support for: 1) populating UDTs through DataAdapter, 2) UDT serialization. UDT columns were treated as 'Object' for storage purposes. This allowed assignment of any Object to a UDT column's value. Type checking, however, is critical, and it is also consistent with UDT support in SQL Server. V2.0 has been changed to allow a UDT value to take on only a value of the specified type or one of its derived types.

User Scenario A user using the DataRowCollection APIs to assign values to UDT/Object DataColumn(s).

Work Around The user can choose to explicitly forgo type checking by setting the column's DataType to 'Object'.
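
For example, a minimal sketch (the column name is hypothetical) of opting out of type checking:

    using System.Data;

    class UntypedUdtColumn
    {
        static void AddUntypedColumn(DataTable table)
        {
            // Declaring the column as Object keeps the V1.x behavior of
            // accepting any value, bypassing the V2.0 type check.
            table.Columns.Add("Payload", typeof(object)); // "Payload" is hypothetical
        }
    }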
 

 

 
Short Description The new XML inference engine assigns different Column ordinals in some specific cases.
Affected APIs DataSet.ReadXml(), DataSet.InferXmlSchema() Severity Low Compat Switch Available No

Description The previous .NET Framework 1.1 XML inference engine mapped XML elements directly to DataSet members such as DataTable and DataColumn as each XML element was read. If an element appeared again in the document with a different structure, the corresponding DataTable was modified to include the new Columns or child elements. When this happened, the new column(s) were always appended and ordinals assigned. Thus column ordinals were a function of the position of the elements in the file rather than the content of the elements. The new inference engine in V2.0 fixes this issue and gives consistent ordering based on the structure (content) of the element.

User Scenario

A user who does *all 4* of the following: 

1) Uses XML Inference to build the initial schema for DataSet (i.e. DataTable's schema),

2) Expects Column ordinals to be consistent,

3) Uses column ordinals to access DataColumn,

4) The input XML file used for inference incrementally adds attributes/elements to a specific element. For instance, the element Customer, which gets mapped to DataTable Customer, has its content model (structure) built incrementally by the addition of new attributes (e.g. "age"). 

<Customer name="John"> <Order orderID="10"></Order> ... </Customer>
<Customer name="Jack" age="30"> <State>CA</State> <Order orderID="10" items="10"> ... </Customer>

In V1.1 the ordering will be: 

  1. Customer.Hidden_PrimaryKey, ordinal = 0
  2. Customer.name, ordinal = 1
  3. Customer.age, ordinal = 2
  4. Customer.State, ordinal = 3

In V2.0 the new ordering will be: 

  1. Customer.State, ordinal = 0
  2. Customer.Hidden_PrimaryKey, ordinal = 1
  3. Customer.name, ordinal = 2
  4. Customer.age, ordinal = 3

Work Around If the user's application is doing *all four* things identified above, then it should either: A. switch to using column names instead of ordinals (recommended), or B. if for some reason ordinals have to be used, modify the application to use the new column ordinals.
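
A sketch of option A, using the Customer/name/age columns from the example above:

    using System;
    using System.Data;

    class NameBasedAccess
    {
        static void ReadCustomers(DataSet ds)
        {
            DataTable customers = ds.Tables["Customer"];
            foreach (DataRow row in customers.Rows)
            {
                // Column names are stable across both inference engines;
                // ordinals are not.
                Console.WriteLine("{0} ({1})", row["name"], row["age"]);
            }
        }
    }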
 

 

 
Short Description Deprecated unused property DataTable.DisplayExpression in V2.0.
Affected APIs DataTable.DisplayExpression {get; set;} Severity Low Compat Switch Available No

Description Deprecated the unused property DataTable.DisplayExpression. Controls that ship today use the TableName for display purposes when binding.

User Scenario Existing application code doing a set/get on DisplayExpression. Once deprecated, the user will get a compile-time warning.

Work Around Remove calls to DisplayExpression and set the TableName property on the DataTable
 

 

 
Short Description DataRow is cleared when the value of one column (cell) is changed
Affected APIs DataTable.Rows.Clear() Severity Low Compat Switch Available No

Description See Title

User Scenario See Title

Work Around If the contents of only the attached rows have to be cleared (detached rows preserved), call DataTable.Rows.Clear(). If the contents of all attached as well as detached rows have to be cleared, call DataTable.Clear().
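
The two options side by side, as a sketch of the workaround described above:

    using System.Data;

    class ClearOptions
    {
        static void ClearTable(DataTable table)
        {
            table.Rows.Clear(); // clears attached rows; detached rows are preserved
            // table.Clear();   // clears attached as well as detached rows
        }
    }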
 

 

 
Short Description XSD: a fixed value compares as equal to the value in the schema even though it contains additional Cyrillic-E (0x0400) characters.
Affected APIs System.Xml.Schema Severity Very Low Compat Switch Available No

Description Comparing two values, one of which contains one or more Cyrillic-E characters (0x0400), produces wrong results (for example, we will treat "foo" and "fooЀ" as the same string). This also poses a security risk: we can potentially allow more characters than the fixed value in the schema allows.

User Scenario Anyone using fixed values for strings in their schemas while the data contains the same strings plus additional characters that have zero weight.

Work Around There is no workaround.
 

 

 
Short Description XmlDataDocument uses internal APIs to create partially initialized DataRows. A DataSet containing such rows, if merged into an empty DataSet, may result in a constraint violation exception in version 2.0. In v1.x, no exception was thrown.
Affected APIs DataSet.Merge Severity Very Low Compat Switch Available No

Description

XmlDataDocument uses internal APIs to create partially initialized DataRows. This partial initialization of DataRows containing String columns can result in invalid rows in v1.x. The reason for the invalid string column values stems from how v1.x stores NULL and DEFAULT column values. For all columns other than String, the storage value for NULL and the implicit default is the same. For String, however, the implicit default value is "null" and the NULL value is the "DbNullValue" string constant.

The string column value is either a non-null value or a null value; i.e. the storage should hold either a user-specified non-null string or "DbNullValue". However, when the internal API DataTable.CreateEmptyRow() is used by XmlDataDocument, the null string column values are stored as "null" instead of "DbNullValue". (The storage for a string column value should never hold "null" in v1.x.) Because CreateEmptyRow() does not do constraint checking, no violation is detected in this step in either v1.x or v2.0.

[** Note: in version 2.0, null values for strings are stored as "null" instead of "DbNullValue", just as for all other DataTypes having "null" as a legal value in their value space. **]

AllowDbNull constraint:
If the column has an AllowDbNull constraint set to false (i.e. no nulls allowed), the check for null looks for the "DbNullValue" string constant instead of "null" and cannot honor the constraint; i.e. it allows columns to have null values.

Merge:
When DataSet X, created by XmlDataDocument, is merged into another, empty DataSet Y, the merge process trusts the incoming rows and uses an internal API, "MergeRow", that copies the exact column values (bypassing the regular row creation and initialization process). As a result, the illegal "null" column value in v1.x gets successfully copied over to DataSet Y, and the null constraint checker does not find any NULL column values.

Version 1.1 Behavior:
DataRows with illegal column values created by XmlDataDocument's use of the internal DataTable.CreateEmptyRow method are successfully merged into another DataSet. The AllowDbNull constraint is compromised.

Version 2.0 Behavior:
The AllowDbNull constraint is always honored, and the constraint checking that happens at the end of the merge process throws an exception.


User Scenario
  1. The user populates an XmlDataDocument, which in turn creates rows in its underlying DataSet. The DataSet has a table with one or more String columns with AllowDbNull=false (i.e. the column value cannot be null).
  2. While populating the XmlDataDocument, the user does not specify an explicit value for the string column.
  3. The user then merges the underlying DataSet into an empty DataSet. The merge is successful in v1.x but may throw a constraint violation exception in v2.0.

Work Around Disable the null constraint by setting DataColumn.AllowDBNull=true (which also happens to be the default).
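
The workaround as code (the column name is hypothetical):

    using System.Data;

    class RelaxNullConstraint
    {
        static void AllowNulls(DataTable table)
        {
            // Setting AllowDBNull back to true (the default) avoids the
            // constraint-check exception at the end of the v2.0 merge.
            table.Columns["Name"].AllowDBNull = true; // "Name" is hypothetical
        }
    }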
 

 

 
Short Description Introducing type checking for UDT DataColumns may break applications relying on *no* type checking.
Affected APIs

All DataColumn access APIs:

  1. DataTable.Rows[rowIndex][columnIndex]
  2. DataTable.Rows.Add(Object[] values)
  3. All other APIs which cause a column's values to get modified.
Severity Medium Compat Switch Available No

Description

There is extremely limited support for UDTs in DataSet for v1.0 and v1.1. UDT columns can be created, but there is no support for:

  • populating UDTs through DataAdapter
  • UDT serialization
  • besides, it is an undocumented feature

Given this, it is not a useful feature as shipped in v1.0/v1.1. UDT columns were treated as 'Object' for storage purposes. This allowed assignment of any Object to a UDT column's value. Using untyped 'Object' storage for UDTs compromised type checking.

We want to allow a UDT value to take on the value of the specified type or its derived types only.

Existing users can use UDTs only through in-memory operations; they cannot be persisted either to XML or to a database.


User Scenario A user assigns arbitrary values to columns of the following types: GUID, Byte[], or a UDT (a user-defined class or struct). Note: for Byte[] columns, we mitigate the issue by trying to coerce the assigned value to Byte[], so if a user assigns an Int to a Byte[] column, it will go through.

Work Around The user can choose to explicitly forgo type checking by setting the column's DataType to 'Object'.
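
A sketch of the V2.0 rule (Shape and Circle are hypothetical UDTs): a typed column accepts its declared type and derived types, and rejects everything else:

    using System.Data;

    class TypedUdtColumn
    {
        class Shape { }           // hypothetical user-defined type
        class Circle : Shape { }  // derived type

        static void Demo(DataTable table)
        {
            table.Columns.Add("Shape", typeof(Shape));
            DataRow row = table.NewRow();
            row["Shape"] = new Circle();  // OK in V2.0: derived type
            // row["Shape"] = 42;         // V1.x: allowed; V2.0: type check fails
            table.Rows.Add(row);
        }
    }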
 

 

 
Short Description The behavior of LinePosition in XmlTextReader.ReadChars method differs between v1.1 and v2.0.
Affected APIs XmlTextReader.ReadChars Severity Very Low Compat Switch Available No

Description The behavior of LinePosition in the XmlTextReader.ReadChars method differs between v1.1 and v2.0. There was a bug in 1.1: when the ReadChars method was called, the LinePosition was not updated accordingly. We decided to fix this bug, reasoning that since the line position was not changing it did not provide much value anyway, so not many people would be using it and be broken by the fix. The fixed 2.0 version gives the actual position of the reader in between ReadChars calls.

User Scenario A user reads characters using the XmlTextReader.ReadChars method and wants to get the line position.

Work Around No work around. The previous behavior was broken, as the line position did not change as the characters were read.
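
A sketch of the scenario (the file path is hypothetical), showing where the position is observed:

    using System;
    using System.Xml;

    class ReadCharsPosition
    {
        static void Main()
        {
            XmlTextReader reader = new XmlTextReader("data.xml"); // hypothetical file
            reader.MoveToContent();
            char[] buffer = new char[64];
            int count;
            while ((count = reader.ReadChars(buffer, 0, buffer.Length)) > 0)
            {
                // V2.0: LinePosition reflects the actual reader position
                // between ReadChars calls; V1.1 left it unchanged.
                Console.WriteLine("read {0} chars at line {1}, position {2}",
                    count, reader.LineNumber, reader.LinePosition);
            }
        }
    }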
 

 

 
Short Description Casting the return value from SqlHelper.ExecuteXmlReader to an XmlTextReader worked reliably in v1.0 and v1.1. In V2.0 this may throw a cast exception, indicating that the underlying type is an XmlTextReaderImpl.
Affected APIs SqlCommand.ExecuteXmlReader() Severity Medium Compat Switch Available No

Description

The user casts the object returned from the ExecuteXmlReader API call to an XmlTextReader. In v1.1 this worked; however, we changed the internal type between V1.1 and V2.0 for a number of reasons:

  1. Support for BinaryXml coming from the server.
  2. A security issue, resulting from the fact that the XmlTextReader exposes the Stream from which it is parsing.

Users should not depend on the fact that the returned reader was an XmlTextReader.


User Scenario Execute a query against SQL Server that returns an XmlReader and try to cast it to an XmlTextReader.

Work Around Program against the type that is actually declared to be returned (XmlReader), instead of relying on the runtime type being a particular derived type.
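
A sketch (the query is hypothetical) of consuming the result through the declared XmlReader type, with no cast:

    using System.Data.SqlClient;
    using System.Xml;

    class ExecuteXmlReaderUsage
    {
        static void RunQuery(SqlConnection connection)
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT * FROM Customers FOR XML AUTO", connection); // hypothetical query
            using (XmlReader reader = cmd.ExecuteXmlReader()) // no cast to XmlTextReader
            {
                while (reader.Read())
                {
                    // Consume the XML through the XmlReader surface only.
                }
            }
        }
    }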
 

 

 
Short Description In V2.0, DataView[DataRowView.Index] == DataRowView is guaranteed, whereas this was not guaranteed in V1.x.
Affected APIs 1. DataRowView.GetHashCode() 2. DataRowView.Row 3. DataRowView.Delete() Severity Low Compat Switch Available No

Description

In v1.x a DataRowView was an immutable instance. DataView maintains a cache of DataRowViews and re-creates the complete cache (whose size could be anywhere from 1 to 1 million DataRowViews) whenever any DataRow in the underlying DataTable is changed.

In v2.0 this is no longer the case: DataRowView.Row can change so that it stays in sync with the DataRowView.Index value. A DataRowView contains 2 components:

  1. DataRowView.Index (private member) points to index of DataRowView as appearing in DataView[n]
  2. DataRowView.DataRow (public, read-only) refers to DataRow associated with position DataView[n]

For v2.0, DataRowView.Index is immutable, whereas DataRowView.DataRow can change to point to the current DataRow associated with DataView[n].

In V1.x, since the complete DataRowView cache was re-created on any change in the DataView, the discarded DataRowView instances were not updated and could be stale, pointing to the wrong index or DataRow; they were practically unusable.

In v2.0, users can hold on to DataRowView instances; any discarded instances are marked as such to avoid inconsistency.

In addition, in V1.x DataRowView.GetHashCode always returned the same value (it delegated to the underlying DataRow, and the reference to it was immutable). In V2.0, since the reference to the DataRow can change, so can the hash code.

Affected APIs:

  1. DataRowView.GetHashCode()
  2. DataRowView.Row
  3. DataRowView.Delete()

User Scenario

Affected User Scenario(s):

  1. The user creates a DataView over a DataTable.
  2. Keeps references to DataRowView instances.
  3. Makes changes to the underlying DataTable's DataRows.
  4. Uses the saved, stale DataRowView instances and invokes API (1), (2), or (3).

The result is unpredictable, as the instances are stale (much like stale C++ pointers), but may be repeatable for a fixed sequence of operations.


Work Around

  1. Get the DataRowView at the desired position using DataView[index].
  2. Scan the DataView[] to locate the DataRowView with the desired DataRow.
  3. Hold a reference to the DataRow object and use it as needed.
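
Steps (1) and (3) as a sketch: fetch the DataRowView fresh from the DataView and keep the DataRow, not the DataRowView:

    using System.Data;

    class FreshRowLookup
    {
        static DataRow RowAt(DataView view, int index)
        {
            DataRowView rowView = view[index]; // always fetch fresh, never cache
            return rowView.Row;                // hold the DataRow instead
        }
    }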
 

 

 
Short Description In v1.x, DataSet.Merge(DataRow[] rows) used DataTable.TableName to look up tables in the DataSet, whereas v2.0 uses the combination of DataTable.Namespace and DataTable.TableName. This results in a change of semantics for v2.0 Merge.
Affected APIs
  1. DataSet.Merge(DataSet dataSet)
  2. DataSet.Merge(DataTable dataTable)
  3. DataSet.Merge(DataRow[] rows)
Severity Medium Compat Switch Available No

Description

In v1.x, DataTable supported the notion of an XSD namespace but didn't attach any semantics to it. DataTables in a DataSet were identifiable by their names alone, as it was a requirement that TableNames in a DataSet be unique. In v2.0, DataSet fully supports the notion of XSD namespaces, and the combination of DataTable.Namespace and DataTable.TableName is required to uniquely identify a DataTable in a DataSet.

For backwards compatibility, v2.0 supports lookups using TableName alone and works as in v1.x when two tables with the same name do not exist (as was the case in v1.x).

Merge in v2.0 uses the namespace for looking up tables only if the current DataSet has a non-empty namespace; if the namespace is empty, it does a namespace-agnostic lookup.


User Scenario
  1. The user creates 2 different DataSets and assigns different namespaces to each of them.
  2. The user assumes v1.x behavior and expects DataSet.Merge to ignore the table namespace while matching tables.
  3. The user uses any of the 3 affected APIs. The v2.0 behavior differs: instead of merging the incoming rows into an existing table, Merge creates a new table and inserts the rows there.
  4. This only happens when the current DataSet on which Merge is invoked has a non-empty namespace that is different from the namespace of the incoming DataTable or DataSet.

Work Around
  1. Before the Merge, reset the namespaces of both DataSets to (empty) or some other common string so that they match, and restore the namespaces after the Merge.
  2. Create a method like:

    public void MyMerge(DataSet targetds, DataRow[] rows)
    {
        // Ignore Namespace for table lookups.
        DataTable table = targetds.Tables[rows[0].Table.TableName];

        // Use the required LoadOption for finer-grained control over the Merge.
        for (int i = 0; i < rows.Length; i++)
            table.LoadDataRow(rows[i].ItemArray, LoadOption.Upsert);
    }
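
For example (sourceds and targetds are hypothetical DataSet variables):

    DataRow[] rows = sourceds.Tables["Customer"].Select(); // all rows of the source table
    MyMerge(targetds, rows);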