Formula:Formula CR 2

From XBRL

Contents

Feedback and resolutions

Issues requiring resolution

Responses pending approval by FWG

If approved, the described specification changes will be drafted over the course of the following week. In general, these responses have already been drafted into the specifications so that the FWG can fully assess them.

None.

New features

None.

Removed features

None.

Changed features

None.

Normative Corrections

2009-04-08 Hitoshi Okumura: The definition of xbrlfe:sequenceSAVConflicts should be modified so that it does not refer to implicit filtering.

I think the following conformance test should not detect this error.

11204 : Aspect rules in Formulae. Static Analysis V-01

Because the Variable spec [4.1 Binding as a sequence (bullet 3)] seems to guarantee that each fact in the evaluation result sequence has equivalent aspect values with respect to all of the other facts in the evaluation result, independently of whether implicit filtering is used.

If my interpretation is right, the reference to implicit filtering should be removed from the definition of xbrlfe:sequenceSAVConflicts in the Formula spec [2.1.2.1 Required aspect values and sources].

  • Resolution: This error code is part of preventing situations where the SAV is ambiguous. It rules out situations in which the source identifies a fact variable that can evaluate to a sequence of facts in which there can be variation in the relevant aspect. You are right that uncovered aspects are matched across facts in the sequence regardless of whether implicit filtering is being used. Thus, the error code criterion will be reworded so that it does not condition on the use of implicit filtering. It becomes:
 Error code xbrlfe:sequenceSAVConflicts MUST be thrown if a source contains the QName of 
 a fact variable that binds as a sequence unless the aspect rule addresses an aspect that 
 is not covered by a filter for the fact variable.
  • Herm: this correction has been approved for insertion into PR-2009-03-31 (in the process of being prepared for uploading)
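The reworded criterion reduces to a simple boolean condition. A minimal sketch in Python, with hypothetical flags standing in for the real formula-model checks:

```python
# Hypothetical sketch of the reworded xbrlfe:sequenceSAVConflicts rule.
# Real processors inspect the variable set model; here two booleans stand
# in for "source names a sequence-binding fact variable" and "the rule's
# aspect is covered by a filter for that variable".

def sequence_sav_conflicts(binds_as_sequence, aspect_covered):
    """Error is thrown for a sequence-binding source unless the aspect rule
    addresses an aspect NOT covered by a filter for the fact variable
    (uncovered aspects match across the whole sequence, so the SAV is
    unambiguous for them)."""
    return binds_as_sequence and aspect_covered

# Sequence-binding source, covered aspect: SAV could vary across the
# sequence, so the error applies.
print(sequence_sav_conflicts(True, True))    # True
# Uncovered aspect: values are matched across all facts, no error.
print(sequence_sav_conflicts(True, False))   # False
```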

2009-02-12 Geoff Shuetrim: formula:value element explanation is in terms of s-equal2 but this is not appropriate

The content of the formula:value element for typed dimension aspect rules should be aspect matching to the output value for that typed dimension aspect rather than s-equal2 to it.

  • Resolution: The wording of the specification has been altered to make clear that the operation of the formula:value style of typed dimension aspect rule operates on the basis of aspect matching rather than a notion of s-equality.

2009-02-11 GuoHui Chen, New Lido: formula:value's child <any> in formula.xsd should have maxOccurs="1"

In the specification's formula.xsd: <element name="typedDimension"> <element name="value"> <any minOccurs="1" maxOccurs="unbounded" />

But it also says: if a typed dimension rule contains a <formula:value> element then that element has a single child element. The child element must be s-equal2 to the child element of the typed dimension value for the output fact.

So I think the child element's @maxOccurs should be "1", not "unbounded".

  • Resolution: The schema has been adjusted as suggested.


2009-01-09 Nathan Summers: s-equal in Aspect Tests

The Complete and non-XDT segment/scenario aspect tests use a single xfi:s-equals to determine if aspects "match". s-equal returns false for identical XML Elements (since segment and scenario elements are not specially addressed in Table 4 of XBRL 2.1). I would think it is necessary for "matching" aspects to also include identical values.

  • Resolution: This is a good point: the current definition would prevent matches for aspects that are reported using the same context, which is a common situation. We have eliminated the usage of s-equality altogether, using the matching system applied for typed dimensions instead.

2009-01-09 Nathan Summers: Period Filters

The Variable spec explicitly states in 3.4.1 that "All filters MUST imply an XPath expression that can be evaluated using any fact as a context item." All filters described by the period filter specification imply an expression beginning with xfi:period, which is defined to only accept facts of type xbrli:item. These expressions must handle the possibility of the context item "." evaluating to an xbrli:tuple.

The inability of period filters to accept tuples manifests itself in the following test cases. Because the filter predicate expressions are joined in an unspecified order, if the period filter gets placed before the concept filter, the test cases fail; if the concept filter is first, it eliminates all tuples and, by the magic of short-circuiting, the period filter expression does not get evaluated. But even if a filter ordering were mandated that always put period filter expressions after concept ones, short-circuiting should not be relied upon because that behavior is not mandated by XPath.

     * 12050 Period rules processing examples: V-07
     * 41210 Boolean And Filter Filters Processing: V-02
     * 42130 Boolean Tandem Filters Processing: V-01

CONFORMANCE WORK TO DO: Make sure that these conformance suite tests are still OK.

  • Resolution: The suggested change to the function has been made to ensure unintended exceptions are not thrown when applying such filters with instances that contain tuples. The revised implied XPath expressions have forms similar to:
 if (fn:QName("http://www.xbrl.org/2003/instance","tuple") intersect xfi:concept-substitutions(fn:node-name(.))) 
 then false()
 else (xfi:period(.)[#test])
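The guard pattern in the revised expression can be sketched in Python; the helper names and the dict-based fact representation are invented for illustration, not part of any specification:

```python
# Sketch of the guard in the revised implied expression: the period
# predicate is evaluated only for items, never for tuples, so no exception
# occurs and no reliance on short-circuit evaluation order is needed.

def is_tuple(fact):
    """Assumed helper: true when the fact is an xbrli:tuple."""
    return fact.get("kind") == "tuple"

def period_test(fact):
    """Assumed helper: the #test predicate applied to xfi:period(.).
    Like xfi:period, it is undefined (raises) for tuples."""
    if is_tuple(fact):
        raise TypeError("xfi:period only accepts xbrli:item facts")
    return fact.get("instant") == "2006-12-31"

def guarded_period_filter(fact):
    # Mirrors: if (<fact is a tuple>) then false() else xfi:period(.)[#test]
    return False if is_tuple(fact) else period_test(fact)

facts = [
    {"kind": "item", "instant": "2006-12-31"},
    {"kind": "item", "instant": "2005-12-31"},
    {"kind": "tuple"},
]
print([guarded_period_filter(f) for f in facts])  # [True, False, False]
```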

2009-01-09 Nathan Summers: Custom Function declarations

The variable spec states that a custom function is any function not defined in the XPath and XQuery functions specification. However, the specification seems to assume the presence of the XFI functions, and none of the test cases contain custom function declarations for XFI functions. Additionally, it would be helpful to make the xfi prefix reserved.

  • Resolution: This has been clarified in the text of the specification and the recommendation to reserve the XFI namespace has been adopted.

2009-01-09 Nathan Summers: Tuple Ancestor Filter

The implied filter expression is (fn:node-name(ancestor::*) eq #qnameOrQnameExpression). The problem is that fn:node-name can take only a single-node argument, whereas ancestor::* returns the sequence of all parent elements. I'm sure the intent is that the expression returns true if the element has any ancestor with the expressed node name; thus, we recommend implying this expression instead:

 fn:not(fn:empty(ancestor::*[fn:node-name(.) eq #qnameOrQnameExpression]))
  • Resolution: The suggested modification will be made.
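The recommended expression asks whether any ancestor carries the given node name. A minimal Python sketch of that walk, using a hypothetical child-to-parent map in place of a real XML tree:

```python
# Sketch of the corrected implied expression: the element matches when ANY
# ancestor's node name equals the filter QName. Nodes are hypothetical
# (name, id) pairs; parent_of maps each node to its parent.

def has_ancestor_named(element, parent_of, qname):
    # Mirrors fn:not(fn:empty(ancestor::*[fn:node-name(.) eq $qname]))
    ancestor = parent_of.get(element)
    while ancestor is not None:
        if ancestor[0] == qname:
            return True
        ancestor = parent_of.get(ancestor)
    return False

root = ("t:outerTuple", 1)
inner = ("t:innerTuple", 2)
leaf = ("t:fact", 3)
parent_of = {inner: root, leaf: inner}

print(has_ancestor_named(leaf, parent_of, "t:outerTuple"))  # True (grandparent)
print(has_ancestor_named(root, parent_of, "t:outerTuple"))  # False (no ancestors)
```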

2009-01-09 Nathan Summers: Explicit Dimension Aspect Test

The specification defines the aspect test for an explicit dimension as xfi:elements-correspond(xfi:fact-explicit-dimension-value($aspectTest:a,#dimension),xfi:fact-explicit-dimension-value($aspectTest:b,#dimension)) but there is no function named xfi:elements-correspond. Since the explicit dimension value is a QName, I'd expect a function to match the QNames' namespace URIs and local parts.

  • Resolution: The function name will be changed to nodes-correspond to align with the function registry. QName matching of namespace URIs and local parts is the intended outcome. Please check the definition of the xfi:nodes-correspond function to confirm that these expectations are met.

2009-01-09 Nathan Summers: Entity Identifier Aspect Test

The Variables spec says that the entity identifier aspect test is: (xfi:identifier-scheme($aspectTest:a) eq xfi:identifier-scheme($aspectTest:b)) and (xfi:identifier-value($aspectTest:a) eq xfi:identifier-value($aspectTest:b))

If you look at the function definitions for xfi:identifier-scheme and xfi:identifier-value, they both take a single argument of type xbrli:identifier. Thus, the aspect test needs to be: (xfi:identifier-scheme(xfi:identifier($aspectTest:a)) eq xfi:identifier-scheme(xfi:identifier($aspectTest:b))) and (xfi:identifier-value(xfi:identifier($aspectTest:a)) eq xfi:identifier-value(xfi:identifier($aspectTest:b)))

To avoid double invocations of xfi:identifier for each variable, however, I would recommend the creation of functions

  1. xs:token=xfi:item-identifier-value(schema-element(xbrli:item))
  2. xs:anyURI=xfi:item-identifier-scheme(schema-element(xbrli:item))

so that the aspect test can be written as: (xfi:item-identifier-scheme($aspectTest:a) eq xfi:item-identifier-scheme($aspectTest:b)) and (xfi:item-identifier-value($aspectTest:a) eq xfi:item-identifier-value($aspectTest:b))

Alternatively, you could define a single identifier comparison function, e.g. xfi:item-identifiers-equal($aspectTest:a, $aspectTest:b)

  • Resolution: This was an editing problem affecting a range of aspect tests. The suggested convenience functions will be defined and the aspect test will be defined in terms of them, to ensure that the function arguments are appropriate.

FUNCTION REGISTRY WORK TO DO: Define the convenience functions described here

  • Herm 2009-03-25: Added functions 80142 xfi.fact-identifier-value and 80143 -scheme. Used name "fact-" for consistency with other functions named fact- instead of item-.
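The intent of the convenience functions can be sketched in Python; the names mimic the 80142/80143 registry entries but the fact representation and function bodies here are purely illustrative:

```python
from collections import namedtuple

# Hypothetical model of the entity identifier aspect test in terms of the
# convenience functions: one extraction per fact, then compare the
# (scheme, value) pair component-wise.
Identifier = namedtuple("Identifier", ["scheme", "value"])

def fact_identifier_scheme(fact):
    """Analogue of xfi:fact-identifier-scheme (assumed shape)."""
    return fact["identifier"].scheme

def fact_identifier_value(fact):
    """Analogue of xfi:fact-identifier-value (assumed shape)."""
    return fact["identifier"].value

def entity_identifier_aspect_test(a, b):
    # (scheme-a eq scheme-b) and (value-a eq value-b)
    return (fact_identifier_scheme(a) == fact_identifier_scheme(b)
            and fact_identifier_value(a) == fact_identifier_value(b))

a = {"identifier": Identifier("http://www.example.com", "TestData")}
b = {"identifier": Identifier("http://www.example.com", "TestData")}
c = {"identifier": Identifier("http://other.example.com", "TestData")}
print(entity_identifier_aspect_test(a, b))  # True
print(entity_identifier_aspect_test(a, c))  # False
```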

Drafting enhancements

2009-01-06 GuoHui Chen, New Lido: 'Target XBRL instance' is used incorrectly

In many places, RC2 says to execute the XPath expression using the <xbrli:xbrl> element of the target XBRL instance as the context item. I keep thinking this means the input XBRL instance, or source XBRL instance, i.e. the XBRL instance to be validated.

But in Formula 1.0 [2.1.2.2 Default aspect rules] it says: All formulae have a default location aspect rule, which requires output facts to be child elements of the target XBRL instance's <xbrli:xbrl>. I think this is an error. The output facts should be child elements of the output XBRL instance.

  • Resolution: This is a drafting error and the sentence has been amended to state that the output facts should be child elements of the output XBRL instance, not the target XBRL instance.

2009-01-08 FWG Call: The term target XBRL instance is misleading

2009-01-06 Herm: When teaching formula in classes I am usually asked why the term "target" is used. People think in geek terms, where language processors consume source code to interpret it or emit output object code, and relationships with from and to ends are often called source and target ends. The consensus (of new students) seems to be that the term "source" applies better to the input instance. We also need a defined term for the output instance (there is none in the formula spec). The dictionary definition http://dictionary.reference.com/browse/target (particularly its etymology) leaves me thinking it isn't the right word.

2009-01-08 FWG Call: Consensus to switch to "source" instead of "target".

  • Resolution: We have now replaced all usages of the term "target XBRL instance" with "input XBRL instance". The word "source" was eschewed because target and source are also used in relation to XLink relationships.

2009-01-14 GuoHui Chen: 'xfi:filter-member-network-selection' clarity

(sent from chengh@newlido.com via formula-feedback@xbrl.org)

The filter member network no longer pertains to dimension arcs; it works for any arc role. The function definition needs editing to make it explicitly clear that the arcs do not have to be dimension arcs but can use any link role and arc role.

  • Resolution: There is no such thing as a dimension arc, so there should be no implication that the arcs have to be dimension arcs. However, in the interests of clarification, the wording of the function definition has been clarified as suggested. New test cases, 90213 v-13 & v-14, have also been added to reflect the generality of this function.

2009-01-09 Nathan Summers: Aspects Covered by Dimension Filters

The Dimension Filter spec gives no indication of which aspects a filter can cover. I presume a dimension filter cannot cover any aspect, but it would be nice to have some normative text saying so, as is found in other specs (Implicit Filters, section 2, for example).

  • Resolution: The suggested change has been made, clarifying that dimension filters can cover XDT dimensions.

2009-01-09 Nathan Summers: Special Cases of Variable Set Evaluation/Non-Evaluation

In section 4 of Variables, the text prior to the special cases list reads "Except for the following two special cases"..., although the list contains three special cases.

The third special case in the list is quite wordy and uses a single, condition-laden sentence. More problematic, however, is that it doesn't specify what is to happen should that situation occur: does the variable set evaluate or does it not? I assume it means that the variable set does NOT evaluate, but some clarifying text, as found with the first two cases in the list, would be helpful.

The relevant text of the specification is:

If a variable set contains a fact variable, V, that has evaluated to a fallback value but that fact
variable could have evaluated to a non-fallback value without forcing a change in the value of any 
one of the other fact variables in the variable set that has not evaluated to a fallback value and
that does not have a dependency on fact variable V.
  • Resolution: The text will be changed to align with the number of special cases. Also, the drafting of the third special case will be clarified to specify that in such special cases, no variable set evaluation has been deemed to have occurred. It will become:
No variable set evaluation will have occurred if the variable set contains a fact variable, V, that
has evaluated to a fallback value but that fact variable could have evaluated to a non-fallback 
value without forcing a change in the value of any one of the other fact variables in the variable 
set that has not, itself, evaluated to a fallback value and that does not, itself, have a dependency
on fact variable V.

That remains quite a mouthful but is an accurate representation of the agreed approach to ensuring that fallback values have intuitive consequences for the sets of variable evaluations that can be performed for a given instance document.

2009-01-05 Herm Fischer: We need to be clearer about the validity requirements on inputs for formula and assertion processing

I know this has been discussed, but I can't find where it is written: must formula processor input be XML or XBRL valid (or may it sneak in with some minor degree of invalidity)? For dimensions, I think at the moment formula processors take whatever is physically in contexts (plus defaults) with no notion of its XDT validity. Is there any statement that the source instance should/must be dimensionally valid?

  • Resolution: The variable specification has been augmented to tighten up its definition of the notion of validity of the target XBRL Instance in relation to the aspect model in use for a variable set. The variable specification will be modified to make explicit the following:
  1. all input XBRL instances MUST be XBRL 2.1 valid.
  2. variable sets using a given aspect model MUST be valid with respect to the specifications defining the aspects in the aspect model. Thus, the non-dimensional aspect model only requires XBRL 2.1 validity while the dimensional aspect model would also require XDT validity.
  3. Aspect model definitions MUST be explicit about the specifications that define the aspects in them.

2009-01-05 Herm: I am not happy just 'resolving' that instances MUST be valid; I think we need to either enforce this or otherwise provide a mechanism to ensure consistent behavior. This arises because a customer, finding XML Schema validation error messages from Xerces to be unfit for human consumption, found that XPE is able to process schema-invalid data as $factvar/text() (not PSVI-typed) and generate humanly consumable assertion messages (see http://wiki.xbrl.org/wiki/Messages_on_assertion_results). Either that is allowed, or it is prohibited, in which case I should provide conformance tests for some situations to be sure that formula processors all have the same reaction when attempting to bind a formula to data in an instance that is Xerces-invalid, or maybe dimensionally invalid (if anybody cares about dimensional invalidity when processing instances). So I propose to generate new test cases after we define consistent behavior: (a) bad dates in date and month-day items, (b) schema-pattern-failing string values in token string items, (c) dimensions with schema-valid members but dimensionally invalid with respect to hypercubes.

2009-01-06 Geoff Shuetrim: What is inconsistent about requiring valid target instances? The use case provided strikes me as bizarre. The user's problem is one of interpretability of Xerces error messages. The proposed solution seems to be to use formulae to generate error messages instead. I think that is well outside the scope and target problem area of the formula specification. If XML Schema validation reporting needs to be enhanced, do that directly (and quite probably in a proprietary way), not via formulae. If we remove the foundation of XBRL validity from target XBRL instances, there is not much we can rely on, and inconsistent behaviours will be much more of a problem for us than they are today.

2009-01-08 FWG Call: There is agreement with Geoff's proposed resolution above.

Other comments

2009-03-26 Andy Harris: Suggested static syntax errors might be important for processors

I would like all formula processors to produce a similar set of model-related static errors. These errors are triggered during the compilation of the formulae.

For example:

  • Arcs have wrong end points
  • Duplicate resources by XML ID or label.
  • Equivalent formula arcs
  • Many more (others need to help define the conditions)

Many of these errors cause indeterminate results in a formula processor, and the formula author does not immediately know why formula processing has gone astray. A formula author has to be an expert to know what the problem is, and even then it takes a long time to find the error in the formulae being authored. Furthermore, all formula processors should behave identically for these severe conditions.

  • FWG Resolution: This will be moved to a usage pattern discussion.

2009-01-09 Nathan Summers: Custom Typed Dimension Equivalence Definitions

I don't see any conformance tests involving equivalence definitions for custom typed dimension definitions.

  • 2009-02-12 Herm: Is 23020 v50-v53 doing what you are asking about?
  • 2009-03-26 Herm: Nathan replied by e-mail: "I followed up with the engineer that made that comment, and he replied that 23020 v50-v53 are sufficient for his needs."

2009-01-09 Nathan Summers: Relative URI Resolution within XPath Expressions

The following linkbase files contain embedded XPath expressions which attempt to open documents by passing a relative URI to XPath's doc() function. Our processor resolves such URIs against the directory from which the Java process is initially executed, so these test cases fail. They would pass if we were to resolve against the URI of the directory in which the formula linkbase resides. The spec should either a) specify the appropriate base URI to use when resolving relative URIs in XPath expressions or b) state that relative URI resolution behavior is implementation dependent. Because XQuery 1.0/XPath 2.0 essentially lets the behavior of fn:doc be implementation-dependent, I would lean towards option b) and possibly go as far as to discourage the use of fn:doc within formulae.

     * 20000 Variables/22010-Variable-Processing-XPathUsage/22010-table-lookup-xml-doc-formula.xml
     * 20000 Variables/22030-Variable-Processing-GeneralVariables/22030-fvToGv-lookupTableRow-formula.xml
     * 10000 Formula/12060-Formula-Processing-OCCRules/12060-source-scen-repl-node-formula.xml
     * 10000 Formula/12060-Formula-Processing-OCCRules/12060-source-seg-repl-node-formula.xml
  • 2009-02-05 RH In the light of the new XML Base specification I would prefer to require the @xml:base attribute to be in the document header for all formula documents containing the fn:doc(). That would solve the relative URI problem.
  • Resolution: (Drawing together the analysis from Herm) fn:doc($uri as xs:string?) recommends but does not insist on using fn:resolve-uri($relative as xs:string?) as xs:anyURI? to resolve its URI argument against "the base URI property of the relevant node in the source document". This will be made a requirement rather than a recommendation in the wording of the variable specification. It will also be clarified that the base URI property of the relevant node in the source document will be determined in accordance with the XML Base specification. The Variables specification will also define relevant node in the source document. This entire issue will be addressed in a new part of section 1.7 of the Variables Specification that specifically defines the evaluation context for XPath expressions. This will go through all parts of the evaluation context defined in the XPath 2.0 specification specifying how parts of the evaluation context will default. This should clear up all of the ambiguities surrounding such issues at the moment.

Herm: added 22010 V-13 to V-15, test case variations that examine xml:base behavior as used by fn:doc; V-16 examines fn:default-collation().

  • 2009-03-26 Herm: The variables.xml spec has the collation paragraph in final shape, thanks, Geoff!!!
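The resolution behaviour required above (resolving the fn:doc() argument against the base URI of the relevant node, per XML Base) can be illustrated with Python's standard RFC 3986 resolver; the URIs below are invented for illustration:

```python
from urllib.parse import urljoin

# Hypothetical document URI of a formula linkbase in a test suite.
doc_uri = "http://example.com/suite/formulae/formula-linkbase.xml"

# Without xml:base, the relative argument of fn:doc() resolves against the
# base URI of the relevant node, i.e. the containing document's URI.
print(urljoin(doc_uri, "lookup-table.xml"))
# http://example.com/suite/formulae/lookup-table.xml

# With xml:base="../tables/" on an ancestor element, XML Base composes the
# bases before the fn:doc() argument is resolved.
base = urljoin(doc_uri, "../tables/")
print(urljoin(base, "lookup-table.xml"))
# http://example.com/suite/tables/lookup-table.xml
```

This mirrors why the test cases fail when a processor instead resolves against the working directory of the Java process: the base URI, not the launch directory, determines the result.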

2009-02-05 Hitoshi Okumura: Ambiguity issue

When facts are interpreted dimensionally, dimension defaults may sometimes cause ambiguity, such that a fact's set of dimension axes cannot be decided uniquely. This situation causes an xbrlve:ambiguousAspects error according to the current spec.

Concrete error situations include the following cases.

  • variable evaluation (essentially, implicit/relative filter processing): dimension defaults may cause ambiguity for the aspects of facts in the dimensional aspect model
  • output of source facts: see conformance test [22161: Evaluation processing ambiguous aspects examples]

But when we consider real XBRL content such as US-GAAP, FINREP, etc., this xbrlve:ambiguousAspects error is problematic, because dimension defaults are widely used in such real XBRL content and the xbrlve:ambiguousAspects error seems likely to occur for it.


The following are problematic examples from FINREP and US-GAAP.

The provided sample instance (InstanceNEW.xbrl) has many facts which have dimensional ambiguity.

The following is an example of ambiguous facts.

...
<context id="I-2006-12">
  <entity>
    <identifier scheme="http://www.example.com">TestData</identifier>
  </entity>
  <period>
    <instant>2006-12-31</instant>
  </period>
</context>
<ifrs-gp:FinancialAssetsHeldForTradingEquityInstruments contextRef="I-2006-12"
  unitRef="U-Monetary" decimals="2">300
</ifrs-gp:FinancialAssetsHeldForTradingEquityInstruments>
...

Dimension definitions of the above fact:

(a) Role1: http://www.c-ebs.org/eu/fr/esrs/finrep/2008-01-01/role/ADimensionalItems
   Hc:EmptyHypercube (closed)  (has no dimensions)
(b) Role2: http://www.c-ebs.org/eu/fr/esrs/finrep/2008-01-01/role/BalanceSheet
   Hc:HcEquityInstrumentsHypercube (closed)
    - Ex:ByEquityInstrumentsDimension (has dimension-defaults)

Both (a) and (b) are possible dimensional interpretations of the above fact.

  • 2009-02-05 Herm: The new dimensions-default interpretation (base spec option 1, defaults apply globally without regard to closed or open hypercubes) causes a problem with Finrep. Finrep is being redesigned right now, and I expect that the new Finrep will take care of this situation. It is not a problem of the formula spec, but of the taxonomy authors. However the problem only impacts dimensional validation. For formula processing the dimension aspects experienced by any fact are not ambiguous, they are exactly what is in the fact's context plus the global defaults, with no regard to the hypercubes or dimension arcs.
Problematic example: US-GAAP

Ambiguous facts in dimensional interpretation can be generated with the US-GAAP Taxonomy (2009PublicReview.zip).

An example of the ambiguous instance is in the SharePoint area. See US-GAAP-ambiguous-sample.zip.

To simplify, I created this sample instance with only the problematic parts, as below:

- referring to us-gaap-ci-stm-2008-10-31.xsd as the taxonomy
- including only ambiguous facts whose concept is StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest


The following is an example of ambiguous facts.

...
<context id="I2006">
  <entity>
    <identifier scheme="http://test">1</identifier>
  </entity>
  <period>
    <instant>2006-12-31</instant>
  </period>
</context>
<unit id="unit">
  <measure>iso4217:USD</measure>
</unit>
<us-gaap:StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest decimals="0" 
  contextRef="I2006" unitRef="unit">30
</us-gaap:StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest>
...

Dimension definitions of the above fact:

(a)Role1: http://xbrl.us/us-gaap/role/statement/StatementOfShareholdersEquityAndOtherComprehensiveIncome
   Hc:StatementTable
    - Ex:StatementScenarioAxis         (has dimension-default)
    - Ex:StatementEquitycomponentsAxis (has dimension-default)
    - Ex:StatementClassOfStockAxis     (has dimension-default)
(b)Role2: http://xbrl.us/us-gaap/role/statement/StatementOfFinancialPositionClassified
   Hc:StatementTable
    - Ex:StatementScenarioAxis         (has dimension-default)
    - Ex:StatementClassOfStockAxis     (has dimension-default)

Both (a) and (b) are possible dimensional interpretations of the above fact.

  • 2009-02-05 Herm: Dimension defaults are now global (base spec change) and always apply, with no concern for extended link roles. I think that, with this new change, every fact now always experiences its context dimensions plus, for any dimension not physically in the context, all global dimension defaults.

[solution comment]
We may need additional information in the formula definition to decide the dimension axes set uniquely.

Usually, specifying the extended link role and the hypercube name enables identifying the dimension axes set, that is, the unique dimensional interpretation for facts. This is general in normal dimension processing, so it seems applicable to formula processing as well.

How about specifying either or both of the following (a) and (b) in formula definition?

(a) extended link role : URI (regular-expression)
(b) hypercube : QName

definition example:

<formula:formula ... aspectModel="dimensional"
   dimensionRoleURI   ="^http://xbrl\.org/formula/conformance/example/role/linkFor3DimsInSeg$"
   dimensionHypercube ="eg:HyperCube1" 
>
   ...
</formula:formula>

Note:

- If only the extended link role is specified, the hypercube is any one of the 'all' hypercubes under the specified extended link role.
- If only the hypercube is specified, the extended link role is any one of the extended link roles which has the specified hypercube.
- The dimensionRoleURI and dimensionHypercube attributes are optional. They are valid only when aspectModel is "dimensional".
- The dimension axes set of compared facts or source facts is decided by the specified extended link role and hypercube.
  • 2009-02-05 Herm: I think with the current spec interpretation this is no longer an issue, because now a fact's dimensional aspects are exactly what is in the context plus the global dimension defaults, with no regard to hypercubes or dimension arcs in the taxonomy.
  • 2009-02-05 WcH: Yes; if every axis has a global default, the default is unique, and there is no need to specify either the role or the hypercube element.
  • Resolution: No changes to the specifications in this CR 2 are required. This issue was recognised in the Fujitsu feedback to CR 1 and led to the introduction of the xbrlve:ambiguousAspects error. That formula response was to push responsibility for elimination of ambiguity back onto the specifications underpinning the aspect model in use. This has contributed to the review of dimension default values in the base specification working group. A key feature of that review is that the dimensional aspect model interpretation of facts is being modified to ensure that there is no situation in which a single fact can be ambiguous in terms of the aspects for which it has values.
  • 2009-02-12 Okumura: Basically, I agree. According to these interpretations, will conformance-test 22161 V04 (or maybe all 22161 tests) be removed?
  • FWG Resolution: Done: 22161 has been commented out from the index file and its variations commented out in the testcase file. The ambiguity error is kept because future aspects (of future modules) may have a use for this error.
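The revised dimensional interpretation described above (a fact's dimension aspects are exactly its context dimensions plus global defaults, ignoring hypercubes and dimension arcs) can be sketched in Python; the helper name and QName strings are hypothetical:

```python
# Sketch of the revised interpretation: start from every global dimension
# default, then let dimensions physically present in the fact's context
# override. Hypercubes and dimension arcs play no role, so the result is
# always unique and xbrlve:ambiguousAspects cannot arise.

def dimensional_aspects(context_dimensions, global_defaults):
    aspects = dict(global_defaults)      # all global defaults first
    aspects.update(context_dimensions)   # explicit context values win
    return aspects

defaults = {"ex:ScenarioAxis": "ex:Actual",
            "ex:ClassOfStockAxis": "ex:CommonStock"}
context = {"ex:ClassOfStockAxis": "ex:PreferredStock"}

print(dimensional_aspects(context, defaults))
```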


2009-03-20 Herm: infrastructure/test.xsd parameter/@name defined wrongly

In this file, element parameter, attribute name, is defined as xs:QName, but should be variable:QName, because it falls under variables spec, sect 1.7, paragraph "attributes with values that are variable QNames". When (if) I fix this, then test case variations 22020 Parameters.xml (v-06, v-07) may be revised, or some new ones added.

Geoff Shuetrim: I do not see why the variable specification covers this schema at all. It is a schema that is used for defining the content of the conformance suite rather than a schema that makes up part of an XBRL DTS with formula content in it. It seems inappropriate to use types from the variable specification in it. Can you elucidate the nature of the problem you are running into with this file in some more detail?

Herm: You are right, I was for a moment thinking the parameter name, like variable name, was variable:QName, but it's an ordinary QName, so parameter/@name is correct. Also test cases v-06 and v-07 are correct as they are. Point withdrawn.

  • Resolution: No specification changes are required.

2009-03-20 GuoHui Chen: segment and scenario remainder filters are needed

Please consider defining a filter that covers segment and scenario remainder aspects. Alternatively do not define segment and scenario remainder aspects.

  • Resolution: Using non-XDT content in segments and scenarios that also contain XDT content is bad practice and not to be encouraged. The formula and related specifications define segment and scenario remainder aspects because the XDT specification does not prohibit such content. However, filters are not provided because the specifications are trying not to encourage such usage. Should good use cases emerge for such filters, appropriate filter definitions will be considered at that time, but not as part of the current specifications package.

2009-02-27 Herm: Use cases for CR2 completion

I've posted use cases for a new proposal to handle formula chaining, tuple generation, and multi-instance processing. In addition this seems to be compatible with an important upcoming requirement for formula processing of very large instances (new use case page).

There is a new proposal for revising the 90502 (linkbase relationship) and 90601 (multi-instance) proposals and prototypes. Linkbase processing can have simplified returns of relationship nodes instead of mixed subsequences of destination concept QNames and arc attribute values. Multi-instance processing can advance from low-level XPath coding in general variables to full use of factVariable and filters (proposed by Muramoto).

The multi-instance proposal then seems a much better solution to the use case requirements of formula chaining and tuple generation, and those wiki pages have been revised. The use cases of A = B + C and C = D + E, and COREP 18, become much simpler to develop and maintain, or so it seems to me at the moment.

Because the prior multi-instance approach (90601) was successfully prototyped, I feel the new proposal layers on what was learned in coding up 90601, and remains feasible (but more work, as syntax is involved).

This leads to a solution proposal to very large instances formula processing.

I believe no changes are required to CR2.

Please understand that the additional areas of functionality described on the use case pages are not intended for CR3; they should be considered independent modules that extend formula processing on separate roadmaps.

We might, for example, progress the CR2 to PR to Rec on current schedule, and begin IWD drafting of the use case functionalities in several independent paths. Each such path would independently track from IWD - PWD - CR{1-n} - PR - Rec, without disturbing a basic formula Rec.

2009-02-20 GuoHui Chen: Covering the unit aspect for variables evaluating to non-numeric facts is counter-intuitive

Reworded by Geoff Shuetrim: In formulae with variables that evaluate to numeric and non-numeric items, implicit filtering forces formula authors to cover the unit aspect for the non-numeric items. The unit aspect can only be covered by specifying a unit filter for the non-numeric item and this is cumbersome and non-intuitive given the design of the existing unit filters. It requires a boolean filter in combination with the unit filter so unit aspects are covered without having the unit filter actually doing any filtering.

  • Resolution: A new unit filter will be created. It will be called a non-numeric filter and it will select only those facts that are non-numeric. For the sake of completeness we will also introduce a fraction filter that covers the unit aspect. It will select only those facts that are fraction item types or derived from the fraction item type. These new filters (or equivalents) will be created via new filter specifications in an approval track separate from that of the specifications covered by this CR 2 set of specifications.

FWG Question: Should these be deferred to a separate approval stream or are they sufficiently urgent that we need to push them through as part of the unit filter specification?

  • Herm 2009-03-10: We proposed (a few months ago) to do a "cover-this-aspect" filter after CR2. I think what you describe is one of them. Like a cover-the-nonXDT-segment filter, or cover-this-dimension. Can this new non-numeric-unit-covering filter go into that post-CR2 cover-stuff filter?
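As an illustration of the selection behaviour the proposed non-numeric filter would have, here is a minimal sketch using a hypothetical fact record structure (not the filter's XML syntax): in XBRL 2.1, only numeric items carry a unitRef, so selecting facts without one picks out the non-numeric items described above.

```python
# Minimal sketch with a hypothetical data model: the proposed non-numeric
# filter would select facts that carry no unit, since only numeric items
# have units in XBRL 2.1. Fact names are placeholders.
facts = [
    {"name": "eg:Assets", "unitRef": "u1", "value": "100"},
    {"name": "eg:Description", "unitRef": None, "value": "some text"},
]

# The filter keeps only facts with no unit, i.e. the non-numeric items.
non_numeric = [f for f in facts if f["unitRef"] is None]
print([f["name"] for f in non_numeric])  # ['eg:Description']
```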

2009-02-18 Herm: The xs:precisionDecimal data type

XML Schema 1.1 includes a precisionDecimal datatype, which allows decimal numbers to have float/double-style exponents, +/- INF, NaN, and an associated precision.

This has an impact on formula, but is this for CR2 or later? Are we implementing XML Schema 1.1?

This data type's treatment of precision is like what fact items call decimals, but it is somewhat the opposite of what XBRL does in calculation roll-up: precision is always inferred from the lexical representation of the number, not from an attribute.

"In general, the precision of the result of a decimal operation is at least as great as the greater of the precisions of its operands. For example, if (1000 with precision -3) is added to (.001 with precision 3), the result is (1000.001 with precision 3). Each operator (addition, multiplication, etc.) defines its own precision rules. It is also necessary to deal with many special cases (what is the result of -INF modulus negative zero, etc.) Again, this can be done by reference to IEEE 754r. "

Geoff Shuetrim: Where is this quote from? The rule seems to be inappropriately non-conservative.

I think there are three issues: (1) internal formula use of this data type, (2) possible XBRL use as an item type, and (3) whether new test cases are now needed.

  • Resolution: No specification or conformance suite changes are required. We are extending XBRL 2.1 which is based on XML Schema 1.0. The new precisionDecimal data type is out of scope for this set of specifications.


2009-02-12 CGH Chen: Output dimension value ordering is indeterminate

Formula Specification OCC uncovered source dimension rules processing examples V-05: Assembling dimensions from various variable uncovered aspects. In the expected output instance, why is the dimension order TypedDim1, ExplDim1, ExplDim2? Why not ExplDim1, ExplDim2, TypedDim1 (the dimension order in the hypercube with role="linkFor3DimsInSeg")? Why not TypedDim1, ExplDim2, ExplDim1 (the uncovered dimensions of the variables in evaluation order, where v1, v2, v3 is the formula's variable binding order, i.e. document order)?

The specification doesn't give any hint about the output order of multiple dimensions. Different formula processors will output contexts with different dimension orders, so the contexts are not s-equal.

The specification only pays attention to the OCC output order, but says nothing about the dimension output order.

Herm 2009-02-12: Only d-equality could be expected of output facts that had the same dimensions. The dimensions spec allows dimensions to be in any ordering in context containers. The formula spec furthermore allows dimensions to be in either container (by their aspect treatment).

  • Resolution: No changes to the specifications are required. s-equality is not a goal for output facts. Where s-equality is of interest, some kind of canonicalisation of the output instance will be required. Having formula-specific output dimension orderings, based on information such as order attributes on hypercube-dimension relationships, would not be sufficient to avoid such canonicalisation requirements. Such an approach is not even viable given the possibility of conflicting orders being defined by different hypercubes.
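The distinction between the two equality notions can be sketched as follows. This is a toy model, not the normative definitions: the dimensional aspect of a context is reduced to dimension/member pairs, so that d-equality compares them as an unordered set while s-equality is sensitive to document order.

```python
# Toy model of two contexts whose dimension elements appear in different
# document orders. Dimension and member names are hypothetical.
ctx1 = [("ExplDim1", "m1"), ("ExplDim2", "m2"), ("TypedDim1", "t1")]
ctx2 = [("TypedDim1", "t1"), ("ExplDim1", "m1"), ("ExplDim2", "m2")]

def d_equal(a, b):
    # Dimensional equality ignores the ordering of the dimension elements.
    return set(a) == set(b)

def s_equal(a, b):
    # Structure equality compares child elements in document order.
    return a == b

print(d_equal(ctx1, ctx2))  # True
print(s_equal(ctx1, ctx2))  # False
```

This is why only d-equality, not s-equality, can be expected of output facts with the same dimensions.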

2009-02-05 Hitoshi Okumura: Sequences with duplicates

The variable spec '4.1 Binding as a sequence' says that the evaluation result MUST include all possible facts from the source sequence that meet the conditions.

What are the evaluation results of the fact variable $c1 in the following situation?

fact variable $c1: bindAsSequence='true', matches='false'

$c1's source sequence : {factA1, factA2, factB1, factB2, factC1, factC2}
  where
    - factA1,factA2 are aspect-matched facts. 
    - factB1,factB2 are aspect-matched facts. 
    - factC1,factC2 are aspect-matched facts. 
    - facts from different groups (factA*, factB*, factC*) are not aspect-matched with each other. 

According to the spec, every combination taking one fact from each group of aspect-matched facts seems to be an evaluation result. That is, there are the following 8 (= 2*2*2) evaluation results. Is this interpretation correct?

evaluation results:
   {factA1, factB1, factC1}
   {factA1, factB1, factC2}
   {factA1, factB2, factC1}
   {factA1, factB2, factC2}
   {factA2, factB1, factC1}
   {factA2, factB1, factC2}
   {factA2, factB2, factC1}
   {factA2, factB2, factC2}
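The combinatorial interpretation above can be sketched as a Cartesian product over the three groups of aspect-matched facts (fact names are the placeholders from the example, not real test data):

```python
# Sketch of the interpretation above: with matches='false', each evaluation
# binds the variable to a sequence containing one fact from every group of
# aspect-matched facts, so the number of evaluations is the product of the
# group sizes (here 2 * 2 * 2 = 8).
from itertools import product

groups = [
    ["factA1", "factA2"],  # aspect-matched with each other
    ["factB1", "factB2"],  # aspect-matched with each other
    ["factC1", "factC2"],  # aspect-matched with each other
]

evaluations = [list(combo) for combo in product(*groups)]
print(len(evaluations))  # 8
print(evaluations[0])    # ['factA1', 'factB1', 'factC1']
```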

Specifically, I put small test contents in the sharepoint area.
See sequences-with-duplicates-testcase.zip (These are made by altering 22170 test contents.)

Geoff Shuetrim: 2009-02-06: The above analysis seems correct to me.

They include two cases, though they may not need to be separated.

(1) Instance includes aspect-matched facts with different values.
 Formula    : evaluationResults-formula.xml
 Instance   : evaluationResults-instance.xml
 Result     : evaluationResults-result.xml
(2) Instance includes aspect-matched facts with same values.
 Formula    : evaluationResults-formula.xml
 Instance   : evaluationResults-instance2.xml
 Result     : evaluationResults-result2.xml

Are these results correct?

Geoff Shuetrim: 2009-02-06: Eyeballing the files, I have one concern: it is not evident to me that we can rely on the ordering of the facts in the sequences that variable $factvar1 evaluates to, and this will lead to potential application dependencies in the ordering of the letters in the computed results. That is not a formula specification problem so much as a possible problem with this specific formula. Otherwise the results look fine to me.

  • Resolution: No specification changes are required. Aspect matched facts do not have to be in agreement as to their values. Enabling different evaluations when there are such facts in an instance ensures that all such aspect matched facts can be tested against a set of assertions. Some may pass and others may fail, depending on their values.

Okumura: 2009-02-12: CONFORMANCE WORK TO DO: I hope that the conformance tests will be expanded to cover these cases.


2009-01-09 Nathan Summers: Ambiguous Validation Errors

Some testcases expect a particular error, but the test data could validly report two or more static analysis errors. The spec does not provide any guidance on error priority.

     * 21363 Unresolved Dependency V-01
     * 21363 Unresolved Dependency V-02
     * 21363 Unresolved Dependency V-03
  • Resolution: The extraneous errors have been removed.

2009-01-09 Nathan Summers: Dimension Container Placement in Result Documents

The results documents for 12062 V-03 and V-04 expect the dimension containers to appear within scenario (as they are in the input). From an aspect matching perspective (and a dimensional perspective, for that matter), it is perfectly valid to have these containers output into the segment. The results documents should be updated with an additional fact and context with the dimension container in the segment.

     * 10000 Formula/12062-Formula-Processing-DimensionUncovered/12062-source-scen-explicit-result.xml
     * 10000 Formula/12062-Formula-Processing-DimensionUncovered/12062-source-scen-typed-result.xml


CONFORMANCE WORK TO DO: address this concern by modifying the conformance suite documentation to indicate that decisions regarding conformance with conformance suite tests should not be sensitive to the container for dimensions.

GEOFF SPEC TWEAKING TO DO??? This probably belongs in the conformance suite documentation rather than in the test cases' internal commentary — wherever it says that results files are compared — to indicate that nobody cares about the container location of dimensions, e.g. when the expected results and produced results do not put a dimension into the same container (for those taxonomies where everything is open, unvalidated, or allows the result dimension in both containers).

  • Resolution: Dimension container-specific tests have been removed. Dimensions track the base-spec WG.

2009-01-09 Nathan Summers: Erroneous File Extension

22180 Bind To Empty Sequence.xml: In Variations "V-51" through "V-57", the linkbase document finrep/22180-finrepFallback-test2-formula.xml is incorrectly referenced as finrep/22180-finrepFallback-test2-formula.xsd (wrong file extension).

  • 2009-02-05 Herm fix checked into svn

2009-01-09 Nathan Summers: Bind-as-sequence Anchoring

There are a handful of tests that rely on binding-as-sequence behaving in a specific manner. According to our interpretation of the spec, you can more or less bind a source sequence of nodes however you want, provided the resulting sequence's nodes match according to their aspects and uncovered aspect values. Our processor always takes the first node of the source sequence and attempts to match the other nodes against its aspects and aspect values: the first node is the "anchor" node, so to speak. It appears that for these tests you must "anchor" the sequence on something other than the first node in order for them to pass; our approach produces result sequences of length 1, which causes preconditions requiring the sequence length to be greater than 1 to evaluate to false. Thus, our formula produces no fact. We presume that a fact must be produced in order to pass the test (it's not entirely clear what constitutes a passing result document; right now we consider it a success when one of the result document's facts matches the fact produced by our formula, and if the result document has one or more facts and our formula fails to produce one, we consider it a fail).

     * 2170 Bind as sequence processing tests: V-11
     * 2170 Bind as sequence processing tests: V-12
     * 2170 Bind as sequence processing tests: V-13
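The interpretation difference can be sketched as follows (a toy model with hypothetical aspect keys, not the actual test data): grouping the source sequence by uncovered-aspect values can yield a matching subsequence longer than one, whereas anchoring on the first node and discarding non-matching facts may not.

```python
# Sketch: group a source sequence of facts by an uncovered-aspect key,
# rather than anchoring on the first node. Facts and keys are hypothetical.
from collections import defaultdict

source = [
    ("f1", "ctxA"),  # (fact, uncovered-aspect key)
    ("f2", "ctxB"),
    ("f3", "ctxB"),
    ("f4", "ctxB"),
]

groups = defaultdict(list)
for fact, key in source:
    groups[key].append(fact)

# Each group is a candidate bound sequence. A precondition such as
# count($seq) gt 1 passes for the ctxB group (length 3) but not for ctxA.
# An implementation that anchors on the first node (f1) would only ever
# consider the ctxA sequence, of length 1, and produce no output fact.
print(dict(groups))
```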
  • 2009-02-12 Herm: Definition of passing result document:
  Documents pass (with no tuple output):
  
  xfi:v-equal-set( 
       (fn:doc( produced-instance-uri )/xbrli:xbrl/*
                 [namespace-uri() ne 'http://www.xbrl.org/2003/instance' and
                  namespace-uri() ne 'http://www.xbrl.org/2003/linkbase']),
       (fn:doc( expected-result-uri )/xbrli:xbrl/*
                 [namespace-uri() ne 'http://www.xbrl.org/2003/instance' and
                  namespace-uri() ne 'http://www.xbrl.org/2003/linkbase'])
                  )
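The pass criterion above can be approximated as follows. This is a hedged sketch only: it compares the non-xbrli, non-link children of two instance documents as unordered multisets of (element name, text value), which simplifies xfi:v-equal-set considerably (it ignores contexts, units, and typed values), so it is illustrative rather than a conformant implementation.

```python
# Sketch: compare produced and expected instances by collecting root
# children outside the xbrli and link namespaces as an unordered multiset.
from collections import Counter
import xml.etree.ElementTree as ET

SKIP = {"http://www.xbrl.org/2003/instance", "http://www.xbrl.org/2003/linkbase"}

def fact_multiset(xml_text):
    root = ET.fromstring(xml_text)
    facts = Counter()
    for child in root:
        # ElementTree tags look like '{namespace-uri}localname'.
        ns = child.tag.split("}")[0].lstrip("{") if child.tag.startswith("{") else ""
        if ns not in SKIP:
            facts[(child.tag, (child.text or "").strip())] += 1
    return facts

# Hypothetical instances: same facts in different order, plus a context
# element (in the xbrli namespace) that the comparison must ignore.
produced = """<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
                          xmlns:eg="http://example.com/eg">
  <xbrli:context id="c1"/>
  <eg:Assets>100</eg:Assets><eg:Liabilities>40</eg:Liabilities>
</xbrli:xbrl>"""

expected = """<xbrli:xbrl xmlns:xbrli="http://www.xbrl.org/2003/instance"
                          xmlns:eg="http://example.com/eg">
  <eg:Liabilities>40</eg:Liabilities><eg:Assets>100</eg:Assets>
  <xbrli:context id="c1"/>
</xbrli:xbrl>"""

print(fact_multiset(produced) == fact_multiset(expected))  # True
```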

2009-01-09 Nathan Summers: Incomplete xsi:schemaLocation

The following files in folder 20000 Variables/22180-Variable-Processing-BindEmpty lack the schemaLocation references necessary for schema validation (should you attempt to validate these documents against the schemas referenced by this attribute). Although this is inconsistent with the rest of the conformance suite, these files shouldn't be necessary to evaluate the formulae within these documents.

     * for-case1.xml
     * for-case1b.xml
     * for-case2.xml
     * for-case2b.xml
  • 2009-02-12 Herm: Fix checked into SVN, but it really is superfluous; a DTS-aware XML validator should be aware of all discovered namespaces and their schemaLocations. FRIS practice is never to provide schemaLocation hints for discovered namespaces, and I don't see why that practice would not also be good for tidying up formula linkbases.
  • I suggest discussion of a way to 'clean up' and make consistent schemaLocation hints (as spec v2 issue).

2009-01-05 FWG Conference Call: Should we require validation of the target instance?

There is no FWG consensus that formula processors should be required to validate input source instance documents as a step before formula processing (as that would restrict construction of formula processors by requiring them to be XBRL processors). There was some discussion in the WG about whether to have test cases requiring formula processors to reject invalid inputs. This led to agreement not to include such test cases.

It was noted that processors which ignore XML Schema or XBRL 2.1 errors and initiate formula processing anyway shouldn't do that (but it is beyond the FWG's remit to police this).

  • Resolution: No changes required.

2009-01-06 GuoHui Chen, New Lido: The segment and scenario filter documentation says that segment filters can cover the complete segment aspect and scenario filters can cover the complete scenario aspect. Which filter can cover the non-XDT segment aspect or the non-XDT scenario aspect?

(sent from chengh@newlido.com via formula-feedback@xbrl.org)

I think: when the variable set's aspect model is 'non-dimensional', the segment filter can cover the complete segment aspect and the scenario filter can cover the complete scenario aspect. When the variable set's aspect model is 'dimensional', the segment filter can cover the non-XDT segment aspect and the scenario filter can cover the non-XDT scenario aspect. Please make this clear in the specification.

2009-01-06 Herm: we had discussed an extension feature (after or separate from CR2) to say 'cover this aspect'

2009-01-08 FWG Call:

  1. The suggestion that the segment/scenario filters cover non-XDT portions in the dimensional aspect model is not allowed; that would raise the xbrlve:filterAspectModelMismatch error.
  2. The extension feature under discussion, to cover non-XDT aspects, will provide a way to cover this aspect, but no compelling customer use case urgency has yet come up to accelerate this.
  3. In currently deployed formula applications, general filters have been used to discover contexts with non-XDT portions (which in this use case required checking all contexts, even those unused by fact items).
  • Resolution: No changes are required to CR2.

2009-01-05 FWG Conference Call: Should we require validation of the output instance?

Discussion of validation of result output facts produced by formulae also concluded that there is no requirement for a formula processor to conduct such validation. Discussion of chaining also noted that the current proposal on the companion wiki page chains mostly by sharing variables, which aren't schema-validated, and possibly by sharing a resulting fact bound to a fact variable, which also isn't going to be schema- or XBRL 2.1-validated before or during sharing.

  • Resolution: No changes required.

Conformance suite feedback

None.