Users of the "o" format expect the object reference to be stolen. Previously,
any error caused scanning to end early, leaking the references that had
not yet been consumed. This patch makes json_pack and related functions
continue scanning the format and parameters so that all references are
stolen. No attempt is made to continue processing if the format string
itself is broken or missing.
'make check' still passes. Ran test_pack under valgrind and verified
that the reference leak is fixed. Added a test that uses refcounts to
verify the reference is correctly stolen after a NULL value error.
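A minimal sketch of the resulting behavior (hypothetical usage, not code
from the patch; assumes <jansson.h>):

    json_t *a = json_string("value");
    json_t *result = json_pack("[o,o]", NULL, a);  /* NULL triggers the error */
    /* result is NULL, but the reference to a has still been stolen and
       released, so nothing leaks and no manual cleanup is needed */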
Issue #135
Make parse_object use json_object_set_new_nocheck and parse_array use
json_array_append_new, and remove the json_decref calls from both the
error and success paths.
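Roughly, the new pattern looks like this (a sketch, not the actual diff;
parse_value stands in for the internal parsing call):

    json_t *value = parse_value(lex, flags, error);  /* hypothetical internal call */
    if (!value)
        goto error;
    /* the _new variant steals the reference on success and on failure,
       so neither path needs its own json_decref(value) */
    if (json_object_set_new_nocheck(object, key, value))
        goto error;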
Fixes #376
The JSON_EMBED encoding flag causes the opening and closing characters
of the top-level array ('[', ']') or object ('{', '}') to be omitted
during encoding. This makes it possible to concatenate multiple arrays
or objects in an output stream, and to emit partially composed
documents.
One example of a partial compose is outputting a JWE object. The output
is a JSON object, but it has one top-level attribute ("ciphertext")
that can grow out of proportion with the rest of the metadata. With the
JSON_EMBED flag, the other metadata can be composed ahead of time and
dumped at the beginning of the output, after which the "ciphertext" and
"tag" attributes can be streamed out in chunks. Thus, the header
material can be composed with Jansson while the ciphertext itself is
emitted manually.
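A rough sketch of such a partial compose (field contents are
illustrative; assumes <jansson.h> and <stdio.h>):

    json_t *hdr = json_pack("{s:s}", "protected", "<header>");
    char *fields = json_dumps(hdr, JSON_EMBED);   /* no '{' or '}' around the output */
    printf("{%s, \"ciphertext\": \"", fields);
    /* ... stream the ciphertext out in chunks here ... */
    printf("\", \"tag\": \"<tag>\"}");
    free(fields);
    json_decref(hdr);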
This function encodes the json_t object to a pre-allocated buffer.
It complements the existing json_loadb() function and is useful for
implementing JSON-RPC (among other protocols) over datagram sockets,
where an entire message must be encoded into a single buffer.
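For example, something along these lines (fd and addr are assumed to be
an already prepared datagram socket and peer address):

    json_t *msg = json_pack("{s:s, s:i}", "jsonrpc", "2.0", "id", 1);
    char buf[512];
    size_t len = json_dumpb(msg, buf, sizeof(buf), JSON_COMPACT);
    if (len > 0 && len <= sizeof(buf))   /* 0 means error; > size means too small */
        sendto(fd, buf, len, 0, (struct sockaddr *)&addr, sizeof(addr));
    json_decref(msg);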
Signed-off-by: Nathaniel McCallum <npmccallum@redhat.com>
The fix limits recursion depth when parsing arrays and objects.
The limit is configurable via the `JSON_PARSER_MAX_DEPTH` setting
in `jansson_config.h` and defaults to 2048.
Update the RFC conformance document to note the limit; the RFC allows
such limits to be set by the implementation, so nothing has actually
changed with respect to conformance.
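A sketch of the new failure mode (assumes the default limit of 2048 and
<string.h>):

    char deep[4096];
    memset(deep, '[', sizeof(deep) - 1);
    deep[sizeof(deep) - 1] = '\0';

    json_error_t error;
    json_t *result = json_loads(deep, 0, &error);
    /* result is NULL: the parser bails out once the nesting limit is
       reached instead of recursing thousands of levels deep */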
Reported by Gustavo Grieco.
Otherwise figuring out what's wrong with your JSON can be tricky,
especially if you're using a single fmt string to validate a large,
complicated schema.
The comma delimiting will make separating keys that contain commas
difficult. For example:
{"foo, bar": true, "baz": false}
will generate errors like:
2 object item(s) left unpacked: foo, bar, baz
but that seems like a small enough corner case not to be worth worrying
about.
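For illustration, a fragment that triggers the new message (the key
order in the message may vary):

    json_error_t error;
    json_t *obj = json_pack("{s:b, s:b}", "foo", 1, "baz", 0);
    if (json_unpack_ex(obj, &error, JSON_STRICT, "{}") < 0) {
        /* prints e.g.: 2 object item(s) left unpacked: baz, foo */
        fprintf(stderr, "%s\n", error.text);
    }
    json_decref(obj);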
I wanted to find a way to handle this without have_unrecognized_keys,
but the strbuffer tooling makes it look like I shouldn't be reaching
in to do things like:
strbuffer_t unrecognized_keys;
unrecognized_keys.value = NULL;
and then using 'unrecognized_keys.value == NULL' in place of
have_unrecognized_keys.
This is particularly useful in modular situations where the allocation
functions are unknown or private. In such cases, the caller of
json_dumps() otherwise has no way to free the returned buffer.
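Presumably this is about exposing the configured allocators (jansson's
json_get_alloc_funcs() matches the description); with that, a caller
can do:

    json_malloc_t alloc_fn;
    json_free_t free_fn;
    json_get_alloc_funcs(&alloc_fn, &free_fn);

    char *out = json_dumps(obj, 0);   /* obj assumed to exist */
    /* ... use out ... */
    free_fn(out);   /* frees with whatever allocator jansson was configured with */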
This is both good practice and nice for OpenBSD users, who will no
longer get the linker's nag message about sprintf/strcpy every time
they link against jansson. It's worth noting that the existing code
looks safe to me - bounds checks were already happening before the
actual calls - so this change is purely extra hardening.
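The substitution is mechanical; schematically (not an actual jansson
call site):

    char buf[32];
    int value = 42;
    /* before: sprintf(buf, "%d", value); */
    snprintf(buf, sizeof(buf), "%d", value);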
This has the consequence that numbers are never converted to integers
when JSON_DECODE_INT_AS_REAL is set, and thus it works correctly for
all integers that are representable as a double.
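A fragment illustrating the resulting behavior (assumes <jansson.h> and
<assert.h>):

    json_error_t error;
    json_t *num = json_loads("123", JSON_DECODE_ANY | JSON_DECODE_INT_AS_REAL, &error);
    assert(json_is_real(num));              /* never a json_integer */
    assert(json_real_value(num) == 123.0);
    json_decref(num);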
Fixes #212.