- Jul 08, 2018
-
-
Eric Biggers authored
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this is redundant with the C structure type ('struct shash_alg'), and crypto_register_shash() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the shash algorithms. This patch shouldn't change any actual behavior. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
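As a rough illustration (hypothetical driver, not taken from the patch itself), this is the kind of shash_alg definition the series touches; the commented-out line is the redundant assignment being removed, since crypto_register_shash() sets the type flag on its own:

#include <crypto/internal/hash.h>
#include <linux/module.h>

/* Placeholder callbacks just to make the sketch complete. */
static int example_init(struct shash_desc *desc) { return 0; }
static int example_update(struct shash_desc *desc, const u8 *data,
			  unsigned int len) { return 0; }
static int example_final(struct shash_desc *desc, u8 *out) { return 0; }

static struct shash_alg example_alg = {
	.digestsize	= 4,
	.init		= example_init,
	.update		= example_update,
	.final		= example_final,
	.base		= {
		.cra_name	 = "example",
		.cra_driver_name = "example-generic",
		.cra_priority	 = 100,
		/* .cra_flags = CRYPTO_ALG_TYPE_SHASH,  -- redundant, removed */
		.cra_blocksize	 = 1,
		.cra_module	 = THIS_MODULE,
	},
};

/* Registration as usual: crypto_register_shash(&example_alg); */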
-
Eric Biggers authored
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-512 or SHA-384 implementation, as is desired for sha512_mb which is only useful under certain workloads and is otherwise extremely slow. Change them to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-256 or SHA-224 implementation, as is desired for sha256_mb which is only useful under certain workloads and is otherwise extremely slow. Change them to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
sha1-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-1 implementation, as is desired for sha1_mb which is only useful under certain workloads and is otherwise extremely slow. Change it to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
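For context, a kernel-side sketch (error handling trimmed, function name hypothetical) of how the priority is consumed: asking for "sha1" by algorithm name picks the highest-priority registered implementation, while a specific driver can still be pinned by its driver name. Moving the generics to priority 100 therefore leaves room below them for implementations such as the *_mb drivers:

#include <crypto/hash.h>

static void pick_sha1_example(void)
{
	struct crypto_shash *best, *generic;

	best = crypto_alloc_shash("sha1", 0, 0);	    /* highest cra_priority wins */
	generic = crypto_alloc_shash("sha1-generic", 0, 0); /* explicitly pin one driver */

	if (!IS_ERR(best))
		crypto_free_shash(best);
	if (!IS_ERR(generic))
		crypto_free_shash(generic);
}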
-
Stephan Mueller authored
According to SP800-56A section 5.6.2.1, the public key to be processed for the DH operation shall be checked for appropriateness. The check shall cover the full verification test in case the domain parameter Q is provided as defined in SP800-56A section 5.6.2.3.1. If Q is not provided, the partial check according to SP800-56A section 5.6.2.3.2 is performed. The full verification test requires the presence of the domain parameter Q. Thus, the patch adds support for handling Q. It is permissible to not provide the Q value as part of the domain parameters. This implies that the interface remains backwards-compatible: as before, only P and G need to be provided. However, if Q is provided, it is imported. Without the test, the NIST ACVP testing fails. After adding this check, the NIST ACVP testing passes. Testing without providing the Q domain parameter has been performed to verify the interface has not changed. Signed-off-by:
Stephan Mueller <smueller@chronox.de> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
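A toy user-space sketch of the check described in the commit above, using tiny made-up parameters (p = 23, q = 11) purely for readability; the kernel code operates on MPIs, not machine words. Full validation requires 1 < y < p - 1 and y^q mod p == 1; without Q only the range check remains:

#include <stdio.h>

static unsigned long long modpow(unsigned long long b, unsigned long long e,
				 unsigned long long m)
{
	unsigned long long r = 1;

	b %= m;
	while (e) {
		if (e & 1)
			r = r * b % m;
		b = b * b % m;
		e >>= 1;
	}
	return r;
}

static int dh_check_pubkey(unsigned long long y, unsigned long long p,
			   unsigned long long q)
{
	if (y <= 1 || y >= p - 1)
		return -1;			/* partial check fails */
	if (q && modpow(y, q, p) != 1)
		return -1;			/* full check fails */
	return 0;
}

int main(void)
{
	printf("y=18: %s\n", dh_check_pubkey(18, 23, 11) ? "reject" : "accept");
	printf("y=5:  %s\n", dh_check_pubkey(5, 23, 11) ? "reject" : "accept");
	return 0;
}

Here y = 18 lies in the order-11 subgroup and is accepted, while y = 5 passes the range check but fails the y^q test.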
-
Stafford Horne authored
As of GCC 9.0.0 the build is reporting warnings like: crypto/ablkcipher.c: In function ‘crypto_ablkcipher_report’: crypto/ablkcipher.c:374:2: warning: ‘strncpy’ specified bound 64 equals destination size [-Wstringop-truncation] strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>", ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ sizeof(rblkcipher.geniv)); ~~~~~~~~~~~~~~~~~~~~~~~~~ This means the strncpy might create a non-null-terminated string. Fix this by explicitly performing '\0' termination. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Nick Desaulniers <nick.desaulniers@gmail.com> Signed-off-by:
Stafford Horne <shorne@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
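A minimal user-space sketch of the fix pattern the commit describes (the struct and field names below are made up to mirror the warning): strncpy() does not NUL-terminate when the source fills the whole buffer, so terminate explicitly:

#include <stdio.h>
#include <string.h>

struct report {
	char geniv[64];
};

int main(void)
{
	struct report r;
	const char *geniv = NULL;	/* e.g. alg->cra_ablkcipher.geniv */

	strncpy(r.geniv, geniv ? geniv : "<default>", sizeof(r.geniv));
	r.geniv[sizeof(r.geniv) - 1] = '\0';	/* explicit termination */

	printf("%s\n", r.geniv);
	return 0;
}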
-
Stephan Mueller authored
According to SP800-56A section 5.6.2.1, the public key to be processed for the ECDH operation shall be checked for appropriateness. When the public key is considered to be an ephemeral key, the partial validation test as defined in SP800-56A section 5.6.2.3.4 can be applied. The partial verification test requires the presence of the field elements a and b. For the implemented NIST curves, b is defined in FIPS 186-4 appendix D.1.2. The element a is implicitly given by the Weierstrass equation in D.1.2, where a = p - 3. Without the test, the NIST ACVP testing fails. After adding this check, the NIST ACVP testing passes. Signed-off-by:
Stephan Mueller <smueller@chronox.de> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
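A toy user-space sketch of the partial validation (SP800-56A 5.6.2.3.4) described above, on a made-up miniature Weierstrass curve y^2 = x^3 - 3x + 18 over GF(97); the kernel checks the real NIST curves, where a = p - 3 and b comes from FIPS 186-4 appendix D.1.2:

#include <stdio.h>

static const long long p = 97, b = 18;		/* a = p - 3 */

static int ecdh_check_pubkey(long long x, long long y)
{
	long long lhs, rhs;

	if (x < 0 || x >= p || y < 0 || y >= p)
		return -1;			/* coordinates out of range */
	lhs = y * y % p;
	rhs = (x * x % p * x + (p - 3) * x + b) % p;
	if (lhs != rhs)
		return -1;			/* point not on the curve */
	return 0;
}

int main(void)
{
	printf("(3,6): %s\n", ecdh_check_pubkey(3, 6) ? "reject" : "accept");
	printf("(5,5): %s\n", ecdh_check_pubkey(5, 5) ? "reject" : "accept");
	return 0;
}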
-
- Jul 01, 2018
-
-
Denis Efremov authored
The function skcipher_walk_next is declared static but marked with EXPORT_SYMBOL_GPL. It's a bit confusing for an internal function to be exported. The visibility of such a function is its own .c file plus all other modules: other *.c files of the same module can't use it, even though all other modules can. Since this is an internal function and not a crucial part of the API, the patch just removes the EXPORT_SYMBOL_GPL marking of skcipher_walk_next. Found by the Linux Driver Verification project (linuxtesting.org). Signed-off-by:
Denis Efremov <efremov@linux.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Remove the original version of the VMAC template that had the nonce hardcoded to 0 and produced a digest with the wrong endianness. I'm unsure whether this had users or not (there are no explicit in-kernel references to it), but given that the hardcoded nonce made it wildly insecure unless a unique key was used for each message, let's try removing it and see if anyone complains. Leave the new "vmac64" template that requires the nonce to be explicitly specified as the first 16 bytes of data and uses the correct endianness for the digest. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Currently the VMAC template uses a "nonce" hardcoded to 0, which makes it insecure unless a unique key is set for every message. Also, the endianness of the final digest is wrong: the implementation uses little endian, but the VMAC specification has it as big endian, as do other VMAC implementations such as the one in Crypto++. Add a new VMAC template where the nonce is passed as the first 16 bytes of data (similar to what is done for Poly1305's nonce), and the digest is big endian. Call it "vmac64", since the old name of simply "vmac" didn't clarify whether the implementation is of VMAC-64 or of VMAC-128 (which produce 64-bit and 128-bit digests respectively); so we fix the naming ambiguity too. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
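A hedged user-space sketch of driving the new "vmac64(aes)" template through AF_ALG, following the description above: the first 16 bytes written are the nonce, the remainder is the message, and the 64-bit digest is read back (error handling omitted; the key and message here are arbitrary placeholders):

#include <linux/if_alg.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_alg addr = {
		.salg_family = AF_ALG,
		.salg_type = "hash",
		.salg_name = "vmac64(aes)",
	};
	unsigned char key[16] = { 0 };		/* AES key */
	unsigned char buf[16 + 11] = { 0 };	/* nonce || message */
	unsigned char digest[8];
	int tfm, req;

	memcpy(buf + 16, "hello vmac!", 11);

	tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfm, (struct sockaddr *)&addr, sizeof(addr));
	setsockopt(tfm, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
	req = accept(tfm, NULL, NULL);

	write(req, buf, sizeof(buf));		/* nonce first, then the data */
	read(req, digest, sizeof(digest));	/* 8-byte big-endian digest */

	for (unsigned int i = 0; i < sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");
	close(req);
	close(tfm);
	return 0;
}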
-
Eric Biggers authored
syzbot reported a crash in vmac_final() when multiple threads concurrently use the same "vmac(aes)" transform through AF_ALG. The bug is pretty fundamental: the VMAC template doesn't separate per-request state from per-tfm (per-key) state like the other hash algorithms do, but rather stores it all in the tfm context. That's wrong. Also, vmac_final() incorrectly zeroes most of the state including the derived keys and cached pseudorandom pad. Therefore, only the first VMAC invocation with a given key calculates the correct digest. Fix these bugs by splitting the per-tfm state from the per-request state and using the proper init/update/final sequencing for requests. Reproducer for the crash: #include <linux/if_alg.h> #include <sys/socket.h> #include <unistd.h> int main() { int fd; struct sockaddr_alg addr = { .salg_type = "hash", .salg_name = "vmac(aes)", }; char buf[256] = { 0 }; fd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(fd, (void *)&addr, sizeof(addr)); setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16); fork(); fd = accept(fd, NULL, NULL); for (;;) write(fd, buf, 256); } The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds VMAC_NHBYTES, causing vmac_final() to memset() a negative length. Reported-by:
<syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com> Fixes: f1939f7c ("crypto: vmac - New hash algorithm for intel_txt support") Cc: <stable@vger.kernel.org> # v2.6.32+ Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The VMAC template assumes the block cipher has a 128-bit block size, but it failed to check for that. Thus it was possible to instantiate it using a 64-bit block size cipher, e.g. "vmac(cast5)", causing uninitialized memory to be used. Add the needed check when instantiating the template. Fixes: f1939f7c ("crypto: vmac - New hash algorithm for intel_txt support") Cc: <stable@vger.kernel.org> # v2.6.32+ Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Jun 22, 2018
-
-
Colin Ian King authored
Trivial fix to correct the indentation of a single statement. Signed-off-by:
Colin Ian King <colin.king@canonical.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
This patch adds the sha384 pre-computed 0-length hash so that device drivers can use it when a hardware engine does not support computing a hash from a 0-length input. Signed-off-by:
Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
This patch adds the sha512 pre-computed 0-length hash so that device drivers can use it when a hardware engine does not support computing a hash from a 0-length input. Signed-off-by:
Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
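For reference, the constants added by the two commits above are simply sha384("") and sha512(""); a quick user-space check (assumes OpenSSL is available, build with -lcrypto) prints the digests a driver can return directly when its engine cannot hash a zero-length input:

#include <openssl/sha.h>
#include <stdio.h>

static void print_digest(const unsigned char *d, size_t len)
{
	for (size_t i = 0; i < len; i++)
		printf("%02x", d[i]);
	printf("\n");
}

int main(void)
{
	unsigned char d384[SHA384_DIGEST_LENGTH];
	unsigned char d512[SHA512_DIGEST_LENGTH];

	SHA384((const unsigned char *)"", 0, d384);
	SHA512((const unsigned char *)"", 0, d512);

	print_digest(d384, sizeof(d384));	/* sha384 of the empty message */
	print_digest(d512, sizeof(d512));	/* sha512 of the empty message */
	return 0;
}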
-
- Jun 15, 2018
-
-
Mauro Carvalho Chehab authored
As we move stuff around, some doc references are broken. Fix some of them via this script: ./scripts/documentation-file-ref-check --fix Manually checked that the produced results are valid, removing a few false positives. Acked-by:
Takashi Iwai <tiwai@suse.de> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Stephen Boyd <sboyd@kernel.org> Acked-by:
Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Acked-by:
Mathieu Poirier <mathieu.poirier@linaro.org> Reviewed-by:
Coly Li <colyli@suse.de> Signed-off-by:
Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Acked-by:
Jonathan Corbet <corbet@lwn.net>
-
- Jun 12, 2018
-
-
Kees Cook authored
The sock_kmalloc() function has no 2-factor argument form, so multiplication factors need to be wrapped in array_size(). This patch replaces cases of: sock_kmalloc(handle, a * b, gfp) with: sock_kmalloc(handle, array_size(a, b), gfp) as well as handling cases of: sock_kmalloc(handle, a * b * c, gfp) with: sock_kmalloc(handle, array3_size(a, b, c), gfp) This does, however, attempt to ignore constant size factors like: sock_kmalloc(handle, 4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ expression HANDLE; type TYPE; expression THING, E; @@ ( sock_kmalloc(HANDLE, - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | sock_kmalloc(HANDLE, - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression HANDLE; expression COUNT; typedef u8; typedef __u8; @@ ( sock_kmalloc(HANDLE, - sizeof(u8) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(__u8) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(char) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(unsigned char) * (COUNT) + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(u8) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(__u8) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(char) * COUNT + COUNT , ...) | sock_kmalloc(HANDLE, - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ expression HANDLE; type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT_ID) + array_size(COUNT_ID, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT_ID + array_size(COUNT_ID, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT_CONST + array_size(COUNT_CONST, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT_ID) + array_size(COUNT_ID, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT_ID + array_size(COUNT_ID, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT_CONST) + array_size(COUNT_CONST, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT_CONST + array_size(COUNT_CONST, sizeof(THING)) , ...) ) // 2-factor product, only identifiers. @@ expression HANDLE; identifier SIZE, COUNT; @@ sock_kmalloc(HANDLE, - SIZE * COUNT + array_size(COUNT, SIZE) , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression HANDLE; expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) 
| sock_kmalloc(HANDLE, - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression HANDLE; expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | sock_kmalloc(HANDLE, - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ expression HANDLE; identifier STRIDE, SIZE, COUNT; @@ ( sock_kmalloc(HANDLE, - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | sock_kmalloc(HANDLE, - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products // when they're not all constants... @@ expression HANDLE; expression E1, E2, E3; constant C1, C2, C3; @@ ( sock_kmalloc(HANDLE, C1 * C2 * C3, ...) | sock_kmalloc(HANDLE, - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants. @@ expression HANDLE; expression E1, E2; constant C1, C2; @@ ( sock_kmalloc(HANDLE, C1 * C2, ...) | sock_kmalloc(HANDLE, - E1 * E2 + array_size(E1, E2) , ...) ) Signed-off-by:
Kees Cook <keescook@chromium.org>
-
Kees Cook authored
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This patch replaces cases of: kmalloc(a * b, gfp) with: kmalloc_array(a * b, gfp) as well as handling cases of: kmalloc(a * b * c, gfp) with: kmalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kmalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kmalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The tools/ directory was manually excluded, since it has its own implementation of kmalloc(). The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kmalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kmalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kmalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(char) * COUNT + COUNT , ...) | kmalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kmalloc + kmalloc_array ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kmalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. 
@@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kmalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kmalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kmalloc(C1 * C2 * C3, ...) | kmalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kmalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kmalloc(sizeof(THING) * C2, ...) | kmalloc(sizeof(TYPE) * C2, ...) | kmalloc(C1 * C2 * C3, ...) | kmalloc(C1 * C2, ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - (E1) * E2 + E1, E2 , ...) | - kmalloc + kmalloc_array ( - (E1) * (E2) + E1, E2 , ...) | - kmalloc + kmalloc_array ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by:
Kees Cook <keescook@chromium.org>
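A user-space illustration of why these two treewide conversions matter: a naive a * b can overflow and silently request a tiny buffer, while the array_size()/kmalloc_array() pattern saturates on overflow so the allocation fails instead. array_size_mimic() below only imitates the kernel helper from include/linux/overflow.h:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static size_t array_size_mimic(size_t a, size_t b)
{
	if (b && a > SIZE_MAX / b)
		return SIZE_MAX;	/* overflow: force the allocation to fail */
	return a * b;
}

int main(void)
{
	size_t count = SIZE_MAX / 8 + 2, size = 8;

	printf("naive  : %zu bytes\n", count * size);	/* wraps to a small number */
	printf("checked: %zu bytes\n", array_size_mimic(count, size));

	/* malloc(SIZE_MAX) fails instead of handing back an undersized buffer */
	printf("malloc : %p\n", malloc(array_size_mimic(count, size)));
	return 0;
}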
-
- Jun 06, 2018
-
-
Kees Cook authored
Replaces open-coded struct size calculations with struct_size() for devm_*, f2fs_*, and sock_* allocations. Automatically generated (and manually adjusted) from the following Coccinelle script: // Direct reference to struct field. @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; identifier VAR, ELEMENT; expression COUNT; @@ - alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(*VAR->ELEMENT), GFP) + alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP) // mr = kzalloc(sizeof(*mr) + m * sizeof(mr->map[0]), GFP_KERNEL); @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; identifier VAR, ELEMENT; expression COUNT; @@ - alloc(HANDLE, sizeof(*VAR) + COUNT * sizeof(VAR->ELEMENT[0]), GFP) + alloc(HANDLE, struct_size(VAR, ELEMENT, COUNT), GFP) // Same pattern, but can't trivially locate the trailing element name, // or variable name. @@ identifier alloc =~ "devm_kmalloc|devm_kzalloc|sock_kmalloc|f2fs_kmalloc|f2fs_kzalloc"; expression HANDLE; expression GFP; expression SOMETHING, COUNT, ELEMENT; @@ - alloc(HANDLE, sizeof(SOMETHING) + COUNT * sizeof(ELEMENT), GFP) + alloc(HANDLE, CHECKME_struct_size(&SOMETHING, ELEMENT, COUNT), GFP) Signed-off-by:
Kees Cook <keescook@chromium.org>
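A user-space mimic of what the struct_size() conversion computes for a struct with a trailing flexible array (the real helper in include/linux/overflow.h additionally saturates on overflow, which this simplified version omits; GNU C typeof is assumed):

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct map {
	int nr;
	int ids[];		/* trailing flexible array member */
};

#define struct_size_mimic(ptr, member, count) \
	(offsetof(typeof(*(ptr)), member) + sizeof((ptr)->member[0]) * (count))

int main(void)
{
	struct map *m;
	size_t n = 16;

	/* Before: kzalloc(sizeof(*m) + n * sizeof(m->ids[0]), GFP_KERNEL) */
	m = calloc(1, struct_size_mimic(m, ids, n));
	if (!m)
		return 1;
	m->nr = n;
	printf("allocated %zu bytes\n", struct_size_mimic(m, ids, n));
	free(m);
	return 0;
}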
-
- May 30, 2018
-
-
Eric Biggers authored
This reverts commit eb772f37, as now the x86 Salsa20 implementation has been removed and the generic helpers are no longer needed outside of salsa20_generic.c. We could keep this just in case someone else wants to add a new optimized Salsa20 implementation. But given that we have ChaCha20 now too, I think it's unlikely. And this can always be reverted back. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The x86 assembly implementations of Salsa20 use the frame base pointer register (%ebp or %rbp), which breaks frame pointer convention and breaks stack traces when unwinding from an interrupt in the crypto code. Recent (v4.10+) kernels will warn about this, e.g. WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c [...] But after looking into it, I believe there's very little reason to still retain the x86 Salsa20 code. First, these are *not* vectorized (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere close to the best Salsa20 performance on any remotely modern x86 processor; they're just regular x86 assembly. Second, it's still unclear that anyone is actually using the kernel's Salsa20 at all, especially given that now ChaCha20 is supported too, and with much more efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm is only slightly faster than salsa20-generic (~15% faster on Skylake, ~20% faster on Zen). The gcc version made little difference. So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just the i686 salsa20-asm, which based on my tests provides a 15-20% speed boost. But that's without updating the code to not use %ebp. And given the maintenance cost, the small speed difference vs. salsa20-generic, the fact that few people still use i686 kernels, the doubt that anyone is even using the kernel's Salsa20 at all, and the fact that a SSE2 implementation would almost certainly be much faster on any remotely modern x86 processor yet no one has cared enough to add one yet, I don't think it's worthwhile to keep. Thus, just remove both the x86_64 and i686 salsa20-asm implementations. Reported-by:
<syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com> Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
Commit 56e8e57f ("crypto: morus - Add common SIMD glue code for MORUS") accidentally considered the glue code to be usable by different architectures, but it seems to be usable only on x86. This patch moves it under arch/x86/crypto and adds 'depends on X86' to the Kconfig options and also removes the prompt to hide these internal options from the user. Reported-by:
kbuild test robot <lkp@intel.com> Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Currently testmgr has separate encryption and decryption test vectors for symmetric ciphers. That's massively redundant, since with few exceptions (mostly mistakes, apparently), all decryption tests are identical to the encryption tests, just with the input/result flipped. Therefore, eliminate the redundancy by removing the decryption test vectors and updating testmgr to test both encryption and decryption using what used to be the encryption test vectors. Naming is adjusted accordingly: each cipher_testvec now has a 'ptext' (plaintext), 'ctext' (ciphertext), and 'len' instead of an 'input', 'result', 'ilen', and 'rlen'. Note that it was always the case that 'ilen == rlen'. AES keywrap ("kw(aes)") is special because its IV is generated by the encryption. Previously this was handled by specifying 'iv_out' for encryption and 'iv' for decryption. To make it work cleanly with only one set of test vectors, put the IV in 'iv', remove 'iv_out', and add a boolean that indicates that the IV is generated by the encryption. In total, this removes over 10000 lines from testmgr.h, with no reduction in test coverage since prior patches already copied the few unique decryption test vectors into the encryption test vectors. This covers all algorithms that used 'struct cipher_testvec', e.g. any block cipher in the ECB, CBC, CTR, XTS, LRW, CTS-CBC, PCBC, OFB, or keywrap modes, and Salsa20 and ChaCha20. No change is made to AEAD tests, though we probably can eliminate a similar redundancy there too. The testmgr.h portion of this patch was automatically generated using the following awk script, with some slight manual fixups on top (updated 'struct cipher_testvec' definition, updated a few comments, and fixed up the AES keywrap test vectors): BEGIN { OTHER = 0; ENCVEC = 1; DECVEC = 2; DECVEC_TAIL = 3; mode = OTHER } /^static const struct cipher_testvec.*_enc_/ { sub("_enc", ""); mode = ENCVEC } /^static const struct cipher_testvec.*_dec_/ { mode = DECVEC } mode == ENCVEC && !/\.ilen[[:space:]]*=/ { sub(/\.input[[:space:]]*=$/, ".ptext =") sub(/\.input[[:space:]]*=/, ".ptext\t=") sub(/\.result[[:space:]]*=$/, ".ctext =") sub(/\.result[[:space:]]*=/, ".ctext\t=") sub(/\.rlen[[:space:]]*=/, ".len\t=") print } mode == DECVEC_TAIL && /[^[:space:]]/ { mode = OTHER } mode == OTHER { print } mode == ENCVEC && /^};/ { mode = OTHER } mode == DECVEC && /^};/ { mode = DECVEC_TAIL } Note that git's default diff algorithm gets confused by the testmgr.h portion of this patch, and reports too many lines added and removed. It's better viewed with 'git diff --minimal' (or 'git show --minimal'), which reports "2 files changed, 919 insertions(+), 11723 deletions(-)". Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
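A simplified sketch of what one combined entry looks like after the rename (the struct below is a stand-in, not the real definition: crypto/testmgr.h has more fields such as key, klen and iv, and the field set here is inferred from the commit description):

#include <stdbool.h>

struct cipher_testvec_sketch {
	const char *ptext;	/* plaintext */
	const char *ctext;	/* ciphertext */
	unsigned int len;	/* one length for both directions (ilen == rlen) */
	bool generates_iv;	/* e.g. kw(aes): the IV is produced by encryption */
};

static const struct cipher_testvec_sketch example = {
	.ptext	= "\x00\x01\x02\x03",
	.ctext	= "\xde\xad\xbe\xef",	/* made-up bytes, not a real test vector */
	.len	= 4,
};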
-
Eric Biggers authored
One "kw(aes)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add this test vector to the encryption test vectors, so we don't lose any test coverage. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
None of the four "ecb(tnepres)" decryption test vectors exactly match an encryption test vector with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
One "cbc(des)" decryption test vector doesn't exactly match an encryption test vector with input and result swapped. It's *almost* the same as one, but the decryption version is "chunked" while the encryption version is "unchunked". In preparation for removing the decryption test vectors, make the encryption one both chunked and unchunked, so we don't lose any test coverage. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Two "ecb(des)" decryption test vectors don't exactly match any of the encryption test vectors with input and result swapped. In preparation for removing the decryption test vectors, add these to the encryption test vectors, so we don't lose any test coverage. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- May 26, 2018
-
-
Eric Biggers authored
crc32c has an unkeyed test vector but crc32 did not. Add the crc32c one (which uses an empty input) to crc32 too, and also add a new one to both that uses a nonempty input. These test vectors verify that crc32 and crc32c implementations use the correct default initial state. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
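A self-contained illustration of why the default initial state needs pinning down: the same bitwise CRC-32 (reflected polynomial 0xEDB88320) gives different digests for different seeds, and the seed plays the role of the optional key for the kernel's crc32/crc32c shashes. With seed 0 and the conventional pre/post inversion, "123456789" yields the well-known check value 0xcbf43926:

#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_ieee(uint32_t seed, const unsigned char *p, size_t len)
{
	uint32_t crc = ~seed;		/* conventional pre-inversion */

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
	}
	return ~crc;			/* conventional post-inversion */
}

int main(void)
{
	const unsigned char msg[] = "123456789";

	printf("seed 0         : %08x\n", crc32_ieee(0, msg, 9));
	printf("seed 0xdeadbeef: %08x\n", crc32_ieee(0xdeadbeef, msg, 9));
	return 0;
}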
-
Eric Biggers authored
Since testmgr uses a single tfm for all tests of each hash algorithm, once a key is set the tfm won't be unkeyed anymore. But with crc32 and crc32c, the key is really the "default initial state" and is optional; those algorithms should have both keyed and unkeyed test vectors, to verify that implementations use the correct default key. Simply listing the unkeyed test vectors first isn't guaranteed to work yet because testmgr makes multiple passes through the test vectors. crc32c does have an unkeyed test vector listed first currently, but it only works by chance because the last crc32c test vector happens to use a key that is the same as the default key. Therefore, teach testmgr to split hash test vectors into unkeyed and keyed sections, and do all the unkeyed ones before the keyed ones. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The Blackfin CRC driver was removed by commit 9678a8dc ("crypto: bfin_crc - remove blackfin CRC driver"), but the corresponding "hmac(crc32)" test vectors were left behind. I see no point in keeping them since nothing else appears to implement or use "hmac(crc32)", which isn't an algorithm that makes sense anyway because HMAC is meant to be used with a cryptographically secure hash function, which CRCs are not. Thus, remove the unneeded test vectors. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The __crc32_le() wrapper function is pointless. Just call crc32_le() directly instead. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
crc32c-generic sets an alignmask, but actually its ->update() works with any alignment; only its ->setkey() and outputting the final digest assume an alignment. To prevent the buffer from having to be aligned by the crypto API for just these cases, switch these cases over to the unaligned access macros and remove the cra_alignmask. Note that this also makes crc32c-generic more consistent with crc32-generic. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
crc32-generic doesn't have a cra_alignmask set, which is desired as its ->update() works with any alignment. However, it incorrectly assumes 4-byte alignment in ->setkey() and when outputting the final digest. Fix this by using the unaligned access macros in those cases. Signed-off-by:
Eric Biggers <ebiggers@google.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
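A portable user-space sketch of the pattern adopted by the two CRC commits above: instead of casting a possibly misaligned key/output pointer to u32 *, read and write the value bytewise. The kernel uses the get_unaligned_le32()/put_unaligned_le32() helpers; the functions below are only stand-ins with equivalent little-endian behaviour:

#include <stdint.h>
#include <stdio.h>

static uint32_t get_unaligned_le32_mimic(const void *p)
{
	const unsigned char *b = p;

	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

static void put_unaligned_le32_mimic(uint32_t v, void *p)
{
	unsigned char *b = p;

	b[0] = v;
	b[1] = v >> 8;
	b[2] = v >> 16;
	b[3] = v >> 24;
}

int main(void)
{
	unsigned char key[5] = { 0x00, 0x78, 0x56, 0x34, 0x12 };
	unsigned char out[5] = { 0 };

	/* &key[1] is not guaranteed to be 4-byte aligned */
	uint32_t seed = get_unaligned_le32_mimic(&key[1]);

	put_unaligned_le32_mimic(seed, &out[1]);	/* misaligned output is fine too */
	printf("seed = 0x%08x\n", seed);
	return 0;
}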
-
Christoph Hellwig authored
Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Now that sock_poll handles a NULL ->poll or ->poll_mask there is no need for a stub. Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
- May 18, 2018
-
-
Ondrej Mosnacek authored
This patch adds optimized implementations of MORUS-640 and MORUS-1280, utilizing the SSE2 and AVX2 x86 extensions. For MORUS-1280 (which operates on 256-bit blocks) we provide both AVX2 and SSE2 implementations. Although SSE2 MORUS-1280 is slower than AVX2 MORUS-1280, it is comparable in speed to the SSE2 MORUS-640. Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch adds a common glue code for optimized implementations of MORUS AEAD algorithms. Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch adds test vectors for MORUS-640 and MORUS-1280. The test vectors were generated using the reference implementation from SUPERCOP (see code comments for more details). Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch adds the generic implementation of the MORUS family of AEAD algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS are Hongjun Wu and Tao Huang. At the time of writing, MORUS is one of the finalists in CAESAR, an open competition intended to select a portfolio of alternatives to the problematic AES-GCM: https://competitions.cr.yp.to/caesar-submissions.html https://competitions.cr.yp.to/round3/morusv2.pdf Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch adds optimized implementations of AEGIS-128, AEGIS-128L, and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions. Signed-off-by:
Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-