* Test equality of different length strings.
* Add tab to json_pack whitespace test.
* Test json_sprintf with empty result and invalid UTF.
* Test json_get_alloc_funcs with NULL arguments.
* Test invalid arguments.
* Add test_chaos to test allocation failure code paths.
* Remove redundant json_is_string checks from json_string_equal and
json_string_copy. Both functions are static and can only be called
with a json string.
Fixes to issues found by test_chaos:
* Fix crash on OOM in pack_unpack.c:read_string().
* Unconditionally free string in string_create upon allocation failure.
Update load.c:parse_value() to reflect this. This resolves a leak on
allocation failure for pack_unpack.c:pack_string() and
value.c:json_sprintf().
Although not visible from CodeCoverage, these changes significantly
increase branch coverage, especially in src/value.c, where branch
coverage rises from 67.4% to 96.3%.
Users of the "o" format expect the object reference to be stolen, but
any error causes the collection process to end early. This patch makes
json_pack and related functions continue scanning the format and
parameters so that all references can be stolen, preventing leaks. It
makes no attempt to continue processing if the format string itself is
broken or missing.
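A minimal sketch of the resulting behaviour, assuming the NULL string
value triggers a pack error as in the new test:

    #include <stdio.h>
    #include <jansson.h>

    int main(void)
    {
        json_t *obj = json_object();    /* refcount == 1 */

        /* The NULL value for the "s" entry makes json_pack() fail... */
        json_t *packed = json_pack("{s:o, s:s}", "stolen", obj, "bad", NULL);
        if (!packed) {
            /* ...but the reference to obj has still been stolen, so it must
             * not be decref'd here; doing so would be a double free. */
            fprintf(stderr, "pack failed, obj already released\n");
        }
        return 0;
    }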
'make check' still passes. Ran test_pack under valgrind and verified
that the reference leak is fixed. Added a test that uses refcounts to
verify that the reference is correctly stolen after a NULL value error.
Issue #135
The JSON_EMBED encoding flag causes the opening and closing characters
of the top-level array ('[', ']') or object ('{', '}') to be omitted
during encoding. This feature makes it possible to concatenate multiple
arrays or objects in the output stream. It also makes it possible to
output partial composes.
One such example of a partial compose is when outputting a JWE object.
The output is a JSON object, but it has one top-level attribute
("ciphertext") that can grow out of proportion with the rest of the
metadata. With the JSON_EMBED flag, the other metadata can be composed
ahead of time and dumped at the beginning of the output, after which the
"ciphertext" and "tag" attributes can be streamed out in chunks. Thus,
the header material can be composed with Jansson and the ciphertext
itself can be composed manually.
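A minimal sketch of the concatenation use case, assuming JSON_EMBED
simply drops the top-level braces as described; the exact output
spacing is illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <jansson.h>

    int main(void)
    {
        json_t *header = json_pack("{s:s, s:s}", "protected", "...", "tag", "...");

        /* embedded holds the object's members without the surrounding braces,
         * so further members can be spliced in by hand. */
        char *embedded = json_dumps(header, JSON_EMBED | JSON_COMPACT);
        printf("{%s,\"ciphertext\":\"...\"}\n", embedded);

        free(embedded);
        json_decref(header);
        return 0;
    }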
This function encodes the json_t object to a pre-allocated buffer.
It complements the already existing json_loadb() function and is
useful for parsing JSON-RPC (among other protocols) when sent over
datagram sockets.
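A rough sketch of the intended use over a datagram socket. Here
send_datagram() is a hypothetical transport helper, and the size check
assumes json_dumpb() returns the number of bytes the full encoding
needs:

    #include <jansson.h>

    extern void send_datagram(const char *buf, size_t len);  /* hypothetical */

    void send_reply(const json_t *obj)
    {
        char buf[256];

        size_t needed = json_dumpb(obj, buf, sizeof(buf), JSON_COMPACT);
        if (needed > 0 && needed <= sizeof(buf))
            send_datagram(buf, needed);   /* buffer is not null-terminated */
    }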
Signed-off-by: Nathaniel McCallum <npmccallum@redhat.com>
Otherwise figuring out what's wrong with your JSON can be tricky,
especially if you're using a single fmt string to validate a large,
complicated schema.
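A minimal sketch, assuming the new error text lists the unmatched keys
by name:

    #include <stdio.h>
    #include <jansson.h>

    int main(void)
    {
        json_error_t error;
        json_t *root = json_pack("{s:i, s:i, s:i}", "foo", 1, "bar", 2, "baz", 3);
        int foo;

        if (json_unpack_ex(root, &error, JSON_STRICT, "{s:i}", "foo", &foo) < 0) {
            /* error.text now reads something like:
             *   "2 object item(s) left unpacked: bar, baz" */
            fprintf(stderr, "%s\n", error.text);
        }
        json_decref(root);
        return 0;
    }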
The comma delimiting makes it hard to separate keys that themselves
contain commas. For example:
{"foo, bar": true, "baz": false}
will generate errors like:
2 object item(s) left unpacked: foo, bar, baz
but that seems like a small enough corner case not to be worth worrying
about.
I wanted to find a way to handle this without have_unrecognized_keys,
but the strbuffer tooling makes it look like I shouldn't be reaching
in to do things like:
strbuffer_t unrecognized_keys;
unrecognized_keys.value = NULL;
and then using 'unrecognized_keys.value == NULL' in place of
have_unrecognized_keys.
This is particularly useful in modular situations where the allocation
functions are either unknown or private. For instance, in such cases,
the caller of json_dumps() has no way to free the returned buffer.
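A rough sketch of the modular case, assuming the allocators may have
been replaced by another component via json_set_alloc_funcs():

    #include <jansson.h>

    void dump_and_release(const json_t *obj)
    {
        json_malloc_t malloc_fn;
        json_free_t free_fn;

        /* Fetch the installed free function so the buffer returned by
         * json_dumps() can be released with the matching allocator. */
        json_get_alloc_funcs(&malloc_fn, &free_fn);

        char *s = json_dumps(obj, JSON_INDENT(2));
        if (s) {
            /* ... use s ... */
            free_fn(s);
        }
    }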
This has the consequence that numbers are never converted to integers
when JSON_DECODE_INT_AS_REAL is set, and thus it works correctly for all
integers that are representable as a double.
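A minimal sketch of the behaviour with the flag set; an integer literal
is expected to decode as a real:

    #include <assert.h>
    #include <jansson.h>

    int main(void)
    {
        json_error_t error;
        json_t *root = json_loads("[42]", JSON_DECODE_INT_AS_REAL, &error);
        json_t *num = json_array_get(root, 0);

        /* Numbers are never converted to integers under this flag. */
        assert(json_is_real(num));
        assert(json_real_value(num) == 42.0);

        json_decref(root);
        return 0;
    }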
Fixes #212.
On unpack, one may want to mix `JSON_STRICT` and optional keys by using
a format like `{s:i,s?o!}`. Unfortunately, this fails the strict check
with a `-1 object item(s) left unpacked` error when the second key is
not specified.
To fix that, we iterate over each key and check whether it has been
successfully unpacked. This is less efficient than the previous method,
but it brings correctness.
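A minimal sketch of the case described above; with this fix the unpack
succeeds even though the optional key is absent:

    #include <stdio.h>
    #include <jansson.h>

    int main(void)
    {
        json_error_t error;
        json_t *root = json_pack("{s:i}", "foo", 42);   /* no "bar" key */
        json_t *bar = NULL;
        int foo;

        if (json_unpack_ex(root, &error, 0, "{s:i,s?o!}",
                           "foo", &foo, "bar", &bar) < 0)
            fprintf(stderr, "unexpected failure: %s\n", error.text);

        json_decref(root);
        return 0;
    }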
This is because it's really easy to get a name collision when compiling
Jansson as a subproject in a larger CMake project. If one project includes
several subprojects, each having their own config.h, the wrong file can
end up being loaded.
Only run the imprecision part if json_int_t is long long, otherwise
the imprecision cannot be tested against json_int_t.
Also, convert the double value to json_int_t before checking the
imprecision, because otherwise the json_int_t is converted to double
and the imprecision happens in the "wrong direction".
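A rough sketch of the conversion direction described above, using
2^53 + 1 as a value that a double cannot represent exactly; the guard
mirrors the "json_int_t is long long" condition:

    #include <assert.h>
    #include <jansson.h>

    int main(void)
    {
    #if JSON_INTEGER_IS_LONG_LONG
        json_int_t exact = 9007199254740993LL;   /* 2^53 + 1 */
        double rounded = (double)exact;          /* rounds to 2^53 */

        /* Convert the double back to json_int_t before comparing, so the
         * precision loss is observed on the integer side. */
        assert((json_int_t)rounded != exact);
    #endif
        return 0;
    }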