Last week I was assigned to work on a simple yet interesting problem. MongoDB stores data as JSON. But it turns out we often have customers - especially in the important financial services market segment - whose data is in XML. (Yes, SOAP still exists too!) To store that data in MongoDB, we need to transform it into JSON.
(For the impatient reader: here's the Github link!)
Storing XML as a text field
There are various ways one could do this. For example, one could simply store the whole XML document in a single field, and additionally extract just a few parts of the XML to store as individual JSON keys. One reason to do this is that those keys can then be indexed and used in queries.
For example:
{ _id : ObjectId(...), firstName : "Henrik", lastName : "Ingo", age : 36, originalXml : "<person> <firstName>Henrik</firstName> <lastName>Ingo</lastName> <height unit='cm'>183</height> <age>36</age> <shoeSize unit='EU'>44</shoeSize> </person>" }
The above MongoDB document could be searched by firstName, lastName and age, and such a query can return the original XML document. But you could not index or query shoeSize or height, because they exist only inside the XML string and MongoDB is completely ignorant of them.
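As a sketch, this hybrid approach might look like the following (the field names and the naive regex extraction are my own illustration, not from the post; a real application would use a proper XML parser):

```javascript
// Naive illustration: pull a few fields out of the XML string so they
// can be stored as indexable top-level keys, alongside the original XML.
function extractField(xml, tag) {
  // Only works for simple elements without attributes -- good enough
  // to illustrate the idea.
  const match = xml.match(new RegExp("<" + tag + ">([^<]*)</" + tag + ">"));
  return match ? match[1] : null;
}

const originalXml =
  "<person><firstName>Henrik</firstName><lastName>Ingo</lastName>" +
  "<height unit='cm'>183</height><age>36</age></person>";

// The document to insert into MongoDB: a few indexable keys plus the
// untouched XML text.
const doc = {
  firstName: extractField(originalXml, "firstName"),
  lastName: extractField(originalXml, "lastName"),
  age: Number(extractField(originalXml, "age")),
  originalXml: originalXml
};
```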
Translating XML into equivalent JSON
In the project I was assigned to, however, we wanted to go further. We decided to translate all of the XML into a JSON document and store that in MongoDB as it is. We also want to translate that JSON back into XML. That part will be implemented later, but it implies a requirement: the XML->JSON translation must clearly be lossless, so that we can later end up with the original XML document.
For a given programming language, you should be able to find a couple libraries that try to do exactly this. For Node/JavaScript I've found: X2JS by Abdulla Abdurakhmanov, xml2js node npm module by leonidas, XML-to-JSON by stsvilik, XML to JSON jquery plugin and xml2json by Stefan Gössner.
I found that most of them approach the problem quite similarly. I chose the last one as the basis for my work, as it was a simple yet robust code base.
The devil is in the details
XML and JSON fundamentally do the same thing: you have a hierarchy of nodes, which ultimately contain data in leaf nodes. As such it is clearly possible to transform from one syntax to an equivalent representation in the other.
The fun part comes with the realization that XML is more convoluted than JSON. (Eliot would say it is "too expressive".) XML elements have attributes and child elements. Now, attributes too are children of their parent element, but they somehow live in their own dimension, separately from the actual child elements.
Historically, in the HTML language that is used to format text documents, this distinction did make sense. The content itself is "data" in the child elements. Things that relate to the structure or formatting of the document are "not data", so they are attributes. For example:
<p id="e12345" style="font-size: 12pt">Hello World</p>
But once we started to use XML as a generic serialization format for arbitrary programmatic data, this distinction no longer makes sense. Semantically, all of the below XML snippets mean the same thing.
<item type="boolean" value="true"></item>
<item type="boolean">true</item>
<item><type>boolean</type><value>true</value></item>
Which is the correct one you should use?
The existence of the attributes is problematic when translating into JSON. In JSON we only have one kind of children. So how can we distinguish between an attribute-child and a proper child?
Most of the libraries I looked into take the approach of making attributes available as children where the keyname is prefixed with an "@". For example the previous XML snippets would become:
{ item : { "@type" : "boolean", "@value" : "true" } }
{ item : { "@type" : "boolean", "#text" : "true" } }
{ item : { "type" : "boolean", "value" : "true" } }
This actually works fine: XML element names cannot contain an "@", so there is no risk of name collisions, and attributes remain easy to distinguish.
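As a rough sketch of this convention (my own illustration; the real libraries walk an actual parsed DOM tree rather than taking the pieces as arguments):

```javascript
// Sketch of the "@"-prefix convention: map an element's attributes and
// text content into one JSON object. The element is represented here by
// its name, an attribute map, and its text -- a simplification.
function elementToJson(name, attributes, text) {
  const obj = {};
  for (const key of Object.keys(attributes)) {
    obj["@" + key] = attributes[key]; // attributes get the "@" prefix
  }
  if (text !== null) {
    obj["#text"] = text; // text node keeps the "#text" name from the parser
  }
  return { [name]: obj };
}

// <item type="boolean">true</item> becomes
// { item: { "@type": "boolean", "#text": "true" } }
const json = elementToJson("item", { type: "boolean" }, "true");
```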
Below I will focus on improvements I made to the "state of the art". If you want to read about more details before that, this XML.com article by Stefan Gössner covers these things more broadly.
My 2 improvements
There were 2 things I had to improve on to achieve what we wanted to do.
All of the libraries I looked at make a similar trade-off between correctness and readability. As you can see above, an XML text node might end up under a #text key in the equivalent JSON document. This actually comes directly from the libxml parser, which returns #text as the value of the .nodeName property. However, for most simple documents that is not very elegant; a stylistically more "correct" translation is to simply put the text as a string value:
<e>Hello</e> -> { "e" : "Hello" }
Otoh, should the XML element "e" also have some attributes, this is of course not possible, so we fall back to using #text as the key:
<e id="123">Hello</e> -> { "e" : { "@id" : "123", "#text" : "Hello" } }
The problem with this is the ambiguity. If we want to reference the string "Hello" in MongoDB, for example to retrieve it in a query, that is impossible if we cannot reliably know its path. The first form can be found simply with { "e" : 1 }, while the latter would need { "e.#text" : 1 }.
This is clearly a mess! In the real world this is going to be a problem. Maybe the id attribute is optional, so some documents have it and others don't. Or maybe it was introduced in a later version of the application, so that old documents don't have an id attribute but newer ones do. Such ambiguity would make our MongoDB database unusable.
To solve this problem I modified the xml2json translation so that attributes are output as siblings to their "parent" node:
<e id="123">Hello</e> -> { "e@id" : "123", "e" : "Hello" }
<e>Hello</e>          -> { "e" : "Hello" }
<e id="123"></e>      -> { "e@id" : "123", "e" : null }
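A minimal sketch of this rule (my own reconstruction for illustration, not the library's actual code):

```javascript
// Sketch of the attributes-as-siblings rule: instead of nesting
// attributes under the element, emit them as "name@attr" keys next to
// it. The element's own value then sits at a stable, predictable path,
// whether or not attributes are present.
function elementToSiblings(name, attributes, text) {
  const out = {};
  for (const key of Object.keys(attributes)) {
    out[name + "@" + key] = attributes[key];
  }
  out[name] = text; // null when the element has no text content
  return out;
}

elementToSiblings("e", { id: "123" }, "Hello");
// -> { "e@id": "123", "e": "Hello" }
elementToSiblings("e", {}, "Hello");
// -> { "e": "Hello" }
elementToSiblings("e", { id: "123" }, null);
// -> { "e@id": "123", "e": null }
```

Whatever the attributes do, a query on { "e" : 1 } now always finds the element's value.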
My second change deals with ordering. In XML the children are ordered:
<html> <p>First paragraph.</p> <p>Second paragraph.</p> </html>
(Note that attributes, otoh, are not ordered.)
In JSON the keys of a document are not in any particular order, and they are not even guaranteed to maintain the order you think they have. The only JSON structure that maintains order is the list. So to translate the above XML into JSON, we'd need a list. Interestingly, the popular way to do this would render the following translation:
{ html : { p : ["First paragraph", "Second paragraph"] } }
That looks good. Except it's a fairly limited solution! For example the following caused problems for the libraries I looked at:
<p>I feel <strong>fine</strong> today!</p>
The Node/NPM library would happily translate this as:
{ p : { strong : "fine", "#text" : "I feel today!" } }
Stefan Gössner's library at least behaved robustly: it detected such "mixed" children and left them untranslated:
{ p : "I feel <strong>fine</strong> today!" }
Neither of the above were satisfactory for my purposes. The solution is to employ JSON list structures when needed:
{ p : ["I feel ", { strong : "fine" }, " today!" ] }
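A sketch of the idea (my own illustration): represent each child as either a string (a text node) or a single-key object (an element), and only fall back to a list when there is more than one child.

```javascript
// Sketch: children are either strings (text nodes) or one-key objects
// (elements). Collapse a lone text child to a plain value; keep mixed
// or multiple children in a list, which preserves their order.
function childrenToJson(children) {
  if (children.length === 1 && typeof children[0] === "string") {
    return children[0]; // simple case: <p>Hello</p> -> "Hello"
  }
  return children; // mixed content keeps its order in a list
}

const mixed = { p: childrenToJson(["I feel ", { strong: "fine" }, " today!"]) };
// -> { p: ["I feel ", { strong: "fine" }, " today!"] }
const simple = { p: childrenToJson(["Hello"]) };
// -> { p: "Hello" }
```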
Note that again we have a style vs. correctness tradeoff. Correct would be to always use a JSON list, keeping the ordering of child elements intact. But in most cases the application probably doesn't care about the ordering, so we assume it is ok to only use lists when actually needed. Most of the time the reason is that you have 2 children with the same nodeName: in this case there were 2 #text nodes, and above we also had 2 p elements.
It's on Github!
I published my improved xml2json library on Github. It's designed to work in modern browsers and uses DOMParser for the XML, but I did successfully use it in Node.js together with the libxml parser. For Node.js you need to change the code a little bit to integrate with the module.exports system and to use the libxml parser instead of window.DOMParser; other than that it works as is. I'll try to integrate those changes into the Github code one day too.
(Note that I did not yet make the corresponding changes to json2xml, so you can't actually translate backwards and get your original XML document.)
Encoding scheme increases json parser complexity
One drawback of the "sometimes use an array" approach is that parsers need to be able to handle both. If there is only one element in the set, it gets represented one way, and with two or more elements in the response set, the data type changes from scalars to arrays.
EJ, that's correct. Or to be precise, for json2xml conversion we would in any case want to support (almost) all possible json documents. But it did increase the complexity of the output phase of xml2json.
Still, it's not that bad and well worth the effort. For example, a MongoDB multi-key index is really cool, but it will only work with a single list, you cannot use it if you have multiple nested lists. So avoiding the list structure when not necessary is both useful and good style.
JSON to XML is quite subtle
You may be interested in this presentation I gave on the subject. It's a very subtle and difficult task to convert between JSON and XML and make everyone on both sides happy.
http://www.balisage.net/Proceedings/vol7/html/Lee01/BalisageVol7-Lee01…
That's an even more extensive treatise on the topic!
Even then, it confirms how different these two worlds are. Even if you master the subtleties well, it comes across that you wrote JXON from an XML background. For example, requiring a schema to get started seems odd from a JSON point of view. In JSON, having a schema is a best case; most of the time you don't.
another approach
Thank you for your article. Now, 3+ years later, it is still an issue, what a shame. I would be happy if MongoDB would handle this issue on their side. My objective is connecting MongoDB to an application which naturally works with XML and cannot handle JSON. There are other NoSQL engines dealing with XML, however, I want to use MongoDB by all means.
Regarding your article - I am really missing the point of handling the types: transforming an XML value like 42 into JSON may result in either { phone: 42 } or { phone: "42" }. Was the issue of dealing with types not a matter in your case?
Let me expose my point of view on the XML<->BSON issue. Actually - BSON, because we want to transform lossless between the two. "BSON extends the JSON model to provide additional data types, ordered fields..." [https://www.mongodb.com/json-and-bson] according to MongoDB note, so we want to handle those aspects in resulting XML.
In our situation, the BSON *is* the origin we want to get transformed to XML (and back of course), so we can focus on the XML to only support what we need in BSON, nothing more. In BSON, there is no idea of "attributes" (what we have in XML) - like you already mentioned in your post - so we dont ever need to handle them in XML, otoh, we have to handle the types supported by BSON in our XML:
{
  name : "John",
  age : 43,
  address : {
    street : "Street",
    contact : "Bar"
  },
  float : 2.14
}
This is the way I would de/serialize MongoDB data (using StAX in Java). bson:type could also be replaced by java-type or xsd-type if you like. There is no single *perfect* solution imho, I just wanted to show another approach to handling this issue.
the serialized XML
<doc>
<name>John</name>
<age bson:type="bson.integer">43</age>
<address>
<street>Street</street>
<contact><Foo/>Bar</contact>
</address>
<float bson:type="bson.float">2.14</float>
</doc>
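Sketching the commenter's idea in JavaScript (the commenter used Java/StAX; the helper function and the bson.* type names below are illustrative assumptions, not an existing API):

```javascript
// Sketch of type-annotated serialization: tag each scalar with a
// bson:type attribute so a JSON -> XML -> JSON round trip can restore
// the original BSON types. The "bson.integer"/"bson.float" names are
// illustrative, mirroring the commenter's example.
function valueToXml(name, value) {
  if (typeof value === "number") {
    const type = Number.isInteger(value) ? "bson.integer" : "bson.float";
    return "<" + name + " bson:type=\"" + type + "\">" + value + "</" + name + ">";
  }
  return "<" + name + ">" + value + "</" + name + ">"; // strings need no annotation
}

valueToXml("age", 43);      // -> '<age bson:type="bson.integer">43</age>'
valueToXml("float", 2.14);  // -> '<float bson:type="bson.float">2.14</float>'
valueToXml("name", "John"); // -> '<name>John</name>'
```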
Hi Nadivane
Thanks for your comment!
Indeed, the situation has not changed a lot. In fact I think what has changed is that this question doesn't come up very often anymore, so there was never any progress. Increasingly the architecture is to have some kind of HTTP server (or other middleware) in front of your MongoDB, and if you insert XML data, then it is natural to convert it there. So apart from the case where I developed this prototype, there was never really any demand to strengthen the support for XML. (And by demand, I mean enough that someone would have been willing to pay for it.)
As for your question, yes, the solution is not perfect, as it loses type information in the direction of BSON->XML. In the case I was working on when developing this, the goal was to develop a prototype to demonstrate back-to-back transformation of XML->BSON->XML. It also happened that we chose Node.js as the platform, so therefore JSON comes kind of naturally, since the MongoDB client in Node will take JavaScript objects natively. In some other language, as you say, you could just go directly XML->BSON and back.
Since the starting point in my case was XML, this solution actually didn't lose type information, because there are no types in XML. But if one would have access to the corresponding XML Schema, then it would indeed be appropriate to get and use the correct types. (And it wouldn't be difficult, I just didn't have more time to work on this.)
It seems in your case you're more interested in JSON->XML->JSON, or indeed BSON->XML->BSON. This library can be used for that, but without an XML schema, the original JSON/BSON types are lost on the way back. I've seen various ways to encode or hint the type of the XML data, and didn't quite like any of them. An attribute like you propose is perhaps the one that makes most sense though.
Your proposed solution is very similar to IBM's JSONx schema: https://www.ibm.com/support/knowledgecenter/SS9H2Y_7.5.0/com.ibm.dp.doc…