- Aug 25, 2015
-
-
santosh.shilimkar@oracle.com authored
Connection could have been dropped while the route is being resolved, so check for a valid cm_id before initiating the connection.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mukesh Kacker authored
rds_send_queue_rm() allows the "current datagram" being queued to exceed the SO_SNDBUF threshold, because it checks the bytes already queued without counting the length of the current datagram. (Since sk_sndbuf is set to twice the requested SO_SNDBUF value as a kernel heuristic, this is usually fine!) If the datagram squeezing past the threshold is itself many times the size of the sk_sndbuf threshold, however, even twice the SO_SNDBUF does not save us: it gets queued but cannot be transmitted, threads block and deadlock, and the device becomes unusable. The check that this datagram does not exceed the SNDBUF threshold (EMSGSIZE) is not applied to it either, because that check is only done when a queueing attempt fails. (Datagrams that follow this one do fail their queueing attempts, go through the check and eventually trip the EMSGSIZE error, but zero-length datagrams fail silently!) This fix moves the check for datagrams exceeding the SNDBUF limit before any processing or queueing is attempted and returns EMSGSIZE early in the rds_sendmsg() code. This change also ensures that all datagrams get checked against the SO_SNDBUF/sk_sndbuf size limits, so large datagrams that exceed those limits never reach rds_send_queue_rm() for processing.
Signed-off-by: Mukesh Kacker <mukesh.kacker@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
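A minimal userspace sketch (not part of the patch) of the behavior this fix produces; the addresses, port and sizes are made up, and AF_RDS is defined as a fallback in case the libc headers lack it:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>

    #ifndef AF_RDS
    #define AF_RDS 21                       /* value from <linux/socket.h> */
    #endif

    int main(void)
    {
        int fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
        int sndbuf = 64 * 1024;
        size_t big_len = 8 * 1024 * 1024;   /* far beyond sk_sndbuf */
        char *big = calloc(1, big_len);
        struct sockaddr_in laddr = { .sin_family = AF_INET };
        struct sockaddr_in faddr = { .sin_family = AF_INET,
                                     .sin_port = htons(18634) };

        if (fd < 0 || !big)
            return 1;
        laddr.sin_addr.s_addr = inet_addr("192.168.1.1");   /* local RDS IP */
        faddr.sin_addr.s_addr = inet_addr("192.168.1.2");   /* peer RDS IP */

        setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        bind(fd, (struct sockaddr *)&laddr, sizeof(laddr));

        /* with the fix, an oversized datagram is rejected up front
         * instead of being queued and wedging the connection */
        if (sendto(fd, big, big_len, 0,
                   (struct sockaddr *)&faddr, sizeof(faddr)) < 0 &&
            errno == EMSGSIZE)
            puts("oversized datagram rejected early with EMSGSIZE");

        free(big);
        return 0;
    }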
-
santosh.shilimkar@oracle.com authored
rds_send_drop_to() is used during socket teardown to find all the messages on the socket and flush them. It can race with the acking code unless it takes the m_rs_lock on each and every message. This plugs a hole where we didn't take m_rs_lock on any message that didn't have the RDS_MSG_ON_CONN flag set. Taking m_rs_lock avoids double frees and other memory corruption, as the ack code otherwise trusts the message m_rs pointer on a socket that had actually been freed. We must take m_rs_lock to access m_rs. Because of lock nesting and rs access, we also need to acquire rs_lock.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Santosh Shilimkar authored
During connection resets, we are destroying the rdma id too soon. We can't destroy it while it is still in use, so let's move rdma_destroy_id() after we clear the rings.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
Fix the assertion level since it's not fatal and can be hit in normal execution paths. There is no need to take the system down. We keep the WARN_ON() to detect the condition if we get here with bad pages.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
WRs (Work Requests) always generate a WC (Work Completion) with a signaled send. The default RDS IB code is set up for unsignaled completion. Since an RDS connection is persistent, we can end up sending data even after a large send when the remote end is no longer active (for any reason). By doing a signaled send at least once per large send, we can detect the problem in the work completion handler and thereby avoid sending more data to an inactive remote.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
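A rough kernel-style sketch of the idea, not the RDS code itself; the helper and batching shape are invented, and ib_post_send()'s bad_wr argument became const in later kernels. A batch of work requests stays unsignaled except for the last one, whose completion (good or bad) tells us whether the remote is still consuming.

    #include <rdma/ib_verbs.h>

    /* Post a chain of send WRs, asking for one signaled completion per batch. */
    static int post_send_batch(struct ib_qp *qp, struct ib_send_wr *wrs, int n)
    {
        struct ib_send_wr *bad_wr;
        int i;

        for (i = 0; i < n; i++) {
            wrs[i].next = (i == n - 1) ? NULL : &wrs[i + 1];
            wrs[i].send_flags = 0;              /* unsignaled by default */
        }
        /* at least one signaled send per large transfer */
        wrs[n - 1].send_flags |= IB_SEND_SIGNALED;

        return ib_post_send(qp, &wrs[0], &bad_wr);
    }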
-
santosh.shilimkar@oracle.com authored
rds_send_xmit() marks the rds message map flag after xmit_[rdma/atomic](), which is clearly wrong. We need to maintain the ownership between the transport and rds. Also take care of the error path.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
This helps detect processes/apps that accidentally try to destroy an RDS socket which they are sharing with other processes/apps.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
Ensure we don't keep sending the data if the link is congested.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
If we get an ENOMEM during rds_ib_recv_refill, we might never come back and refill again later. This patch makes sure to kick krdsd into helping out. To achieve this we add an RDS_RECV_REFILL flag and update the refill path based on it, so that at least some thread will keep posting receive buffers. Since krdsd and softirq both might race for the refill, we schedule on the work queue based on ring_low instead of ring_empty.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
If the IP address table hasn't changed, there is no need to remove the entries only to add them back again. Let's fix it.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
Destroy the IB state early during shutdown. Otherwise we can get callbacks after the QP is no longer able to handle them.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
We were still seeing rare occurrences of the WARN_ON(recv->r_frag) which indicates that the recv refill path was finding allocated frags in ring entries that were marked free. These were usually followed by OOM crashes. They only seem to be occurring in the presence of completion errors and connection resets. This patch ensures that we free the frag as we mark the ring entry free. This should stop the refill path from finding allocated frags in ring entries that were marked free.
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
santosh.shilimkar@oracle.com authored
In rds_cmsg_rdma_args(), 'ret' is used by rds_pin_pages(), which returns the number of pinned pages on success, and that same value is then returned to the caller of rds_cmsg_rdma_args() on success, which is not intended. Commit f4a3fc03 ("RDS: Clean up error handling in rds_cmsg_rdma_args") removed the 'ret = 0' line, which broke RDS RDMA mode. Fix it by restoring the return value on rds_pin_pages() success, keeping the clean-up in place.
Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Aug 07, 2015
-
-
Sowmini Varadhan authored
Register pernet subsys init/stop functions that will set up and tear down per-net RDS-TCP listen endpoints. Unregister the pernet subsys functions on 'modprobe -r' to clean up these endpoints. Enable keepalive on both accept and connect socket endpoints. The keepalive timer expiration will ensure that client socket endpoints are removed as appropriate from the netns when an interface is removed from a namespace. Register a device notifier callback that will clean up all sockets (and thus avoid the need to wait for the keepalive timeout) when the loopback device is unregistered from the netns, indicating that the netns is being deleted.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
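A minimal sketch of the registration pattern described above, with empty init/exit bodies; the function and variable names here are placeholders rather than the actual rds_tcp code.

    #include <linux/module.h>
    #include <net/net_namespace.h>

    static __net_init int rds_tcp_listen_init(struct net *net)
    {
        /* create the per-namespace RDS-TCP listen endpoint here */
        return 0;
    }

    static __net_exit void rds_tcp_listen_exit(struct net *net)
    {
        /* tear down the per-namespace listen endpoint here */
    }

    static struct pernet_operations rds_tcp_pernet_ops = {
        .init = rds_tcp_listen_init,
        .exit = rds_tcp_listen_exit,
    };

    static int __init example_init(void)
    {
        return register_pernet_subsys(&rds_tcp_pernet_ops);
    }

    static void __exit example_exit(void)
    {
        /* runs on "modprobe -r" */
        unregister_pernet_subsys(&rds_tcp_pernet_ops);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");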
-
Sowmini Varadhan authored
Open the sockets calling sock_create_kern() with the correct struct net pointer, and use that struct net pointer when verifying the address passed to rds_bind().
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
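A small sketch of the pattern, assuming the current sock_create_kern() signature that takes a struct net as its first argument; error handling and the RDS-specific setup are omitted.

    #include <linux/net.h>
    #include <linux/in.h>
    #include <net/net_namespace.h>

    static int make_kernel_tcp_sock(struct net *net, struct socket **sockp)
    {
        /* 'net' flows in from the caller (e.g. the owning socket or the
         * pernet init hook), so the endpoint lives in the right namespace
         * instead of implicitly in init_net */
        return sock_create_kern(net, PF_INET, SOCK_STREAM, IPPROTO_TCP, sockp);
    }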
-
- Aug 03, 2015
-
-
Dan Carpenter authored
"len" is a signed integer. We check that len is not negative, so it goes from zero to INT_MAX. PAGE_SIZE is unsigned long so the comparison is type promoted to unsigned long. ULONG_MAX - 4095 is a higher than INT_MAX so the condition can never be true. I don't know if this is harmful but it seems safe to limit "len" to INT_MAX - 4095. Fixes: a8c879a7 ('RDS: Info and stats') Signed-off-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
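A standalone userspace demonstration of the promotion problem described above (not the RDS code): the first comparison silently promotes the signed int to unsigned long and can never be true, while the INT_MAX-based bound behaves as intended.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int len = INT_MAX;      /* already passed a "len < 0" check */

        /* promoted to unsigned long: INT_MAX is always <= ULONG_MAX - 4095,
         * so this guard never fires */
        if (len > ULONG_MAX - 4095)
            puts("unreachable");

        /* bounding against a limit that fits in an int does work */
        if (len > INT_MAX - 4095)
            puts("len rejected");

        return 0;
    }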
-
- Jul 14, 2015
-
-
Wengang Wang authored
Fixes: 3e0249f9 ("RDS/IB: add refcount tracking to struct rds_ib_device")
There is a missing drop of rds_ib_device.refcount in case rds_ib_alloc_fmr fails (mr pool running out). This leads to the refcount overflowing, and the complaint at line 117 (see below) is seen. From the vmcore: s_ib_rdma_mr_pool_depleted is 2147485544 and rds_ibdev->refcount is -2147475448. That is evidence the mr pool is used up, so rds_ib_alloc_fmr is very likely to return ERR_PTR(-EAGAIN).

    115 void rds_ib_dev_put(struct rds_ib_device *rds_ibdev)
    116 {
    117         BUG_ON(atomic_read(&rds_ibdev->refcount) <= 0);
    118         if (atomic_dec_and_test(&rds_ibdev->refcount))
    119                 queue_work(rds_wq, &rds_ibdev->free_work);
    120 }

The fix is to drop the refcount when rds_ib_alloc_fmr fails.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- Jul 03, 2015
-
-
Markus Elfring authored
The module_put() function tests whether its argument is NULL and then returns immediately. Thus the test around the call is not needed. This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
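The shape of the simplification, in generic form; the struct and field names are illustrative only. Since module_put() already tolerates a NULL module, the guard adds nothing.

    #include <linux/module.h>

    struct example_transport {
        struct module *t_owner;
    };

    /* before: redundant NULL test around the call */
    static void put_owner_old(struct example_transport *trans)
    {
        if (trans->t_owner)
            module_put(trans->t_owner);
    }

    /* after: module_put(NULL) is a no-op, so call it unconditionally */
    static void put_owner_new(struct example_transport *trans)
    {
        module_put(trans->t_owner);
    }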
-
- Jun 21, 2015
-
-
Fabian Frederick authored
This patch also renames sg to sglist and aligns function parameters. See Documentation/DMA-API.txt - Part Id for scatterlist details.
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jun 12, 2015
-
-
Matan Barak authored
Currently, ib_create_cq uses cqe and comp_vector instead of the extensible ib_cq_init_attr struct. Earlier patches already changed the vendors to work with ib_cq_init_attr. This patch changes the consumers too.
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
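A sketch of the consumer-side change (the handler and helper names are placeholders): cqe and comp_vector now travel in struct ib_cq_init_attr rather than as separate ib_create_cq() arguments.

    #include <rdma/ib_verbs.h>

    static void example_cq_comp_handler(struct ib_cq *cq, void *context)
    {
        /* completion processing would go here */
    }

    static void example_cq_event_handler(struct ib_event *event, void *data)
    {
        /* async CQ event handling would go here */
    }

    static struct ib_cq *example_create_cq(struct ib_device *dev, void *ctx,
                                           int entries, int vector)
    {
        struct ib_cq_init_attr cq_attr = {
            .cqe = entries,         /* formerly a bare argument */
            .comp_vector = vector,  /* formerly a bare argument */
        };

        return ib_create_cq(dev, example_cq_comp_handler,
                            example_cq_event_handler, ctx, &cq_attr);
    }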
-
- Jun 02, 2015
-
-
Wengang Wang authored
The BUG_ON at lines 452/453 is triggered in rds_send_xmit():

    441                 while (ret) {
    442                         tmp = min_t(int, ret, sg->length -
    443                                               conn->c_xmit_data_off);
    444                         conn->c_xmit_data_off += tmp;
    445                         ret -= tmp;
    446                         if (conn->c_xmit_data_off == sg->length) {
    447                                 conn->c_xmit_data_off = 0;
    448                                 sg++;
    449                                 conn->c_xmit_sg++;
    450                                 if (ret != 0 && conn->c_xmit_sg == rm->data.op_nents)
    451                                         printk(KERN_ERR "conn %p rm %p sg %p ret %d\n", conn, rm, sg, ret);
    452                                 BUG_ON(ret != 0 &&
    453                                        conn->c_xmit_sg == rm->data.op_nents);
    454                         }
    455                 }

It is complaining that the total sent length is bigger than what we want to send: rds_ib_xmit() returns a wrong value on the second call for the same rds_message. The sg and off passed by rds_send_xmit() to rds_ib_xmit() are based on scatterlist.offset/length, but rds_ib_xmit() operates on scatterlist.dma_address/dma_length; when dma_length is larger than length there is a problem. For the second and later calls to rds_ib_xmit() for the same rds_message, at least one of the following two is wrong:
1) the scatterlist to start with -- the chosen one can be far beyond the correct one;
2) the offset to start with within that scatterlist.
Fix: add op_dmasg and op_dmaoff to the rm_data_op structure, indicating the scatterlist and the offset within it for rds_ib_xmit() to start with, respectively. op_dmasg and op_dmaoff are initialized to zero when doing the dma mapping for the first time and are updated when filling send slots. The same applies to rds_iw_xmit() too.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- Jun 01, 2015
-
-
Sowmini Varadhan authored
The currently attached transport for a PF_RDS socket may be obtained from user space by invoking getsockopt(2) using the SO_RDS_TRANSPORT option at the SOL_RDS level. The integer optval returned will be one of the RDS_TRANS_* constants defined in linux/rds.h.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
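A userspace sketch of the query, assuming rds_fd is an already-created PF_RDS socket; SOL_RDS, SO_RDS_TRANSPORT and the RDS_TRANS_* values come from <linux/rds.h>.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/rds.h>

    static int print_rds_transport(int rds_fd)
    {
        int trans;
        socklen_t optlen = sizeof(trans);

        if (getsockopt(rds_fd, SOL_RDS, SO_RDS_TRANSPORT, &trans, &optlen) < 0) {
            perror("getsockopt(SO_RDS_TRANSPORT)");
            return -1;
        }

        if (trans == RDS_TRANS_TCP)
            printf("transport: RDS_TRANS_TCP\n");
        else if (trans == RDS_TRANS_IB)
            printf("transport: RDS_TRANS_IB\n");
        else
            printf("transport: %d\n", trans);
        return 0;
    }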
-
Sowmini Varadhan authored
An application may deterministically attach the underlying transport for a PF_RDS socket by invoking setsockopt(2) with the SO_RDS_TRANSPORT option at the SOL_RDS level. The integer argument to setsockopt must be one of the RDS_TRANS_* transport types, e.g., RDS_TRANS_TCP. The option must be specified before invoking bind(2) on the socket, and may only be used once on the socket. An attempt to set the option on a bound socket, or to invoke the option after a successful SO_RDS_TRANSPORT attachment, will return EOPNOTSUPP.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
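A userspace sketch of the usage rule above; the bind address is a placeholder, and AF_RDS is defined as a fallback in case the libc socket headers lack it.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/rds.h>

    #ifndef AF_RDS
    #define AF_RDS 21               /* value from <linux/socket.h> */
    #endif

    /* Returns a PF_RDS socket pinned to the TCP transport, or -1 on error. */
    static int rds_tcp_socket(const char *local_ip)
    {
        int fd = socket(AF_RDS, SOCK_SEQPACKET, 0);
        int trans = RDS_TRANS_TCP;
        struct sockaddr_in sin;

        if (fd < 0)
            return -1;

        /* must happen before bind(); repeating it, or setting it on a
         * bound socket, fails with EOPNOTSUPP */
        if (setsockopt(fd, SOL_RDS, SO_RDS_TRANSPORT, &trans, sizeof(trans)) < 0)
            goto err;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = inet_addr(local_ip);
        if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
            goto err;

        return fd;
    err:
        close(fd);
        return -1;
    }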
-
Sowmini Varadhan authored
User space applications that desire to explicitly select the underlying transport for a PF_RDS socket may do so by using the SO_RDS_TRANSPORT socket option at the SOL_RDS level before bind(). The integer argument provided to the socket option would be one of the RDS_TRANS_* values, e.g., RDS_TRANS_TCP. This commit exports the constant values needed by such applications via <linux/rds.h>.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- May 18, 2015
-
-
Sagi Grimberg authored
Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- May 11, 2015
-
-
Eric W. Biederman authored
In preparation for changing how struct net is refcounted on kernel sockets, pass the knowledge that we are creating a kernel socket from sock_create_kern through to sk_alloc.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- May 09, 2015
-
-
Sowmini Varadhan authored
When the peer of an RDS-TCP connection restarts, a reconnect attempt should only be made from the active side of the TCP connection, i.e. the side that has a transient TCP port number. Do not add the passive side of the TCP connection to the c_hash_node and thus avoid triggering rds_queue_reconnect() for passive rds connections.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sowmini Varadhan authored
When running RDS over TCP, the active (client) side connects to the listening ("passive") side at the RDS_TCP_PORT. After the connection is established, if the client side reboots (potentially without even sending a FIN) the server still has a TCP socket in the established state. If the server side now gets a new SYN from the client with a different client port, TCP will create a new socket pair, but the RDS layer will incorrectly pull up the old rds_connection (which is still associated with the stale t_sock and RDS socket state). This patch corrects this behavior by having rds_tcp_accept_one() always create a new connection for an incoming TCP SYN. The rds and tcp state associated with the old socket pair is cleaned up via the rds_tcp_state_change() callback, which in most cases is invoked when the client TCP sends a FIN on restart, triggering a transition to CLOSE_WAIT state. In the rarer event of client death without a FIN, TCP_KEEPALIVE probes on the socket will detect the stale socket, and the TCP transition to CLOSE state will trigger the RDS state cleanup.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- May 04, 2015
-
-
David Ahern authored
c0adf54a introduced new sparse warnings:

    CHECK /home/dahern/kernels/linux.git/net/rds/ib_cm.c
    net/rds/ib_cm.c:191:34: warning: incorrect type in initializer (different base types)
    net/rds/ib_cm.c:191:34:    expected unsigned long long [unsigned] [usertype] dp_ack_seq
    net/rds/ib_cm.c:191:34:    got restricted __be64 <noident>
    net/rds/ib_cm.c:194:51: warning: cast to restricted __be64

The temporary variable for the sequence number should have been declared as __be64 rather than u64. Make it so.
Signed-off-by: David Ahern <david.ahern@oracle.com>
Cc: shamir rabinovitch <shamir.rabinovitch@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
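The kind of change the sparse warning asks for, in generic form (the helper is invented): keep the wire-format value in a __be64 and convert explicitly where host order is needed.

    #include <linux/types.h>
    #include <asm/byteorder.h>

    static u64 ack_seq_to_host(__be64 wire_ack_seq)
    {
        /* declaring the temporary as __be64 (not u64) keeps sparse happy
         * and documents that the value is still big-endian */
        __be64 dp_ack_seq = wire_ack_seq;

        return be64_to_cpu(dp_ack_seq);
    }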
-
shamir rabinovitch authored
rdma_conn_param private data is copied using memcpy after headers such as cma_hdr (see cma_resolve_ib_udp as an example), so the start of the private data is aligned to the end of the structure that comes before it. If that structure ends with a u32, the start of the private data will only be 4-byte aligned. Structures that use u8/u16/u32/u64 are naturally aligned, but if the structure's start is not 8-byte aligned, all u64 members of the structure will be unaligned. To solve this we must use the special macros that allow unaligned access to those unaligned members. Addresses the following kernel log seen when attempting to use RDMA:

    Kernel unaligned access at TPC[10507a88] rds_ib_cm_connect_complete+0x1bc/0x1e0 [rds_rdma]

Acked-by: Chien Yen <chien.yen@oracle.com>
Signed-off-by: shamir rabinovitch <shamir.rabinovitch@oracle.com>
[Minor tweaks for top of tree by:]
Signed-off-by: David Ahern <david.ahern@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
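An illustrative fragment (names invented) of the unaligned-access helpers the commit refers to, used instead of directly dereferencing a pointer to a u64 that may only be 4-byte aligned.

    #include <linux/types.h>
    #include <asm/unaligned.h>

    /* private_data follows a header and may only be 4-byte aligned, so a
     * direct load of a u64 member can fault on strict-alignment CPUs (SPARC) */
    static u64 read_private_u64(const void *private_data)
    {
        return get_unaligned((const u64 *)private_data);
    }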
-
- Apr 08, 2015
-
-
Sowmini Varadhan authored
If a determined set of concurrent senders keep the send queue full, we can loop forever inside rds_send_xmit. This fix has two parts. First, we drop out of the while(1) loop after we've processed a large batch of messages. Second, we add a generation number that gets bumped each time the xmit bit lock is acquired; if someone else has jumped in and made progress in the queue, we skip our goto restart. Original patch by Chris Mason.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sowmini Varadhan authored
Passive connections were added for the case where one loopback IB connection between identical addresses needs another connection to store the second QP. Unfortunately, they were also created in the case where the addresses differ and we already have both QPs. This led to a message reordering bug:
- two different IB interfaces and addresses on a machine: A B
- traffic is sent from A to B
- connection from A-B is created, connect request sent
- listening accepts connect request, B-A is created
- traffic flows, next_rx is incremented
- unacked messages exist on the retrans list
- connection A-B is shut down, new connect request sent
- listen sees existing loopback B-A, creates new passive B-A
- retrans messages are sent and delivered because of 0 next_rx
The problem is that the second connection request saw the previously existing parent connection. Instead of using it, and using the existing next_rx_seq state for the traffic between those IPs, it mistakenly thought that it had to create a passive connection. We fix this by only using passive connections in the special case where laddr and faddr match. In this case we'll only ever have one parent sending connection requests and one passive connection created, as the listening path sees the existing parent connection which initiated the request. Original patch by Zach Brown.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Mar 12, 2015
-
-
Arnd Bergmann authored
The rds_iw_update_cm_id function stores a large 'struct rds_sock' object on the stack in order to pass a pair of addresses. This happens to just fit within the 1024-byte stack size warning limit on x86, but just exceeds that limit on ARM, which gives us this warning:

    net/rds/iw_rdma.c:200:1: warning: the frame size of 1056 bytes is larger than 1024 bytes [-Wframe-larger-than=]

As the use of this large variable is basically bogus, we can rearrange the code to not do that. Instead of passing an rds socket into rds_iw_get_device, we now just pass the two addresses that we have available in rds_iw_update_cm_id, and we change rds_iw_get_mr accordingly, to create two address structures on the stack there.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Mar 02, 2015
-
-
Ying Xue authored
Now that TIPC no longer depends on the iocb argument in its internal implementations of the sendmsg() and recvmsg() hooks defined in the proto structure, no user of the iocb argument remains in them at all. We can therefore drop the redundant iocb argument completely from all implementations of both sendmsg() and recvmsg() in the entire networking stack.
Cc: Christoph Hellwig <hch@lst.de>
Suggested-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Feb 11, 2015
-
-
Sowmini Varadhan authored
When the RDS transport is TCP, we cannot inline the call to rds_send_xmit from rds_cong_queue_update because (a) we are already holding the sock_lock in the recv path, and will deadlock when tcp_setsockopt/tcp_sendmsg try to get the sock lock, and (b) cong_queue_update does an irqsave on the rds_cong_lock, and this will trigger warnings (for a good reason) from functions called out of sock_lock. This patch reverts the change introduced by 2fa57129 ("RDS: Bypass workqueue when queueing cong updates").

The patch has been verified for both RDS/TCP and RDS/RDMA to ensure that there are no regressions for either transport:
- for RDS/TCP, a client-server unit test was used, with the server blocked in gdb and thus unable to drain its rcvbuf, eventually triggering an RDS congestion update;
- for RDS/RDMA, the standard IB regression tests were used.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Feb 08, 2015
-
-
Sowmini Varadhan authored
Commit 083735f4 ("rds: switch rds_message_copy_from_user() to iov_iter") breaks rds_message_copy_from_user() semantics on success, and causes it to return nbytes copied, when it should return 0. This commit fixes that bug.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rasmus Villemoes authored
The macro rdsdebug is defined as

    pr_debug("%s(): " fmt, __func__ , ##args)

Hence it doesn't make sense to include the name of the calling function explicitly in the format string passed to rdsdebug.
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
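What the cleanup amounts to, with an invented message string: the macro already prefixes __func__, so callers should not pass it again.

    #include <linux/printk.h>

    #define rdsdebug(fmt, args...) pr_debug("%s(): " fmt, __func__ , ##args)

    static void example(void *rm)
    {
        /* before: prints the function name twice */
        rdsdebug("%s: queued message %p\n", __func__, rm);

        /* after: the macro supplies the "example(): " prefix itself */
        rdsdebug("queued message %p\n", rm);
    }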
-
- Feb 05, 2015
-
-
Sasha Levin authored
Max unacked packets/bytes is an int, while sizeof(long) was used in the sysctl table. This means that when they were read we'd also leak kernel memory to userspace along with the timeout values.
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
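A sketch of the fix with the table reduced to a single entry (the default value is a placeholder): when .data points at an int, .maxlen must be sizeof(int); declaring sizeof(unsigned long) lets the read run past the variable.

    #include <linux/sysctl.h>

    static int rds_sysctl_max_unacked_packets = 8;  /* placeholder default */

    static struct ctl_table rds_sysctl_rds_table[] = {
        {
            .procname     = "max_unacked_packets",
            .data         = &rds_sysctl_max_unacked_packets,
            .maxlen       = sizeof(int),    /* was sizeof(unsigned long) */
            .mode         = 0644,
            .proc_handler = proc_dointvec,
        },
        { }
    };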
-
- Dec 15, 2014
-
-
Geert Uytterhoeven authored
net/rds/message.c: In function ‘rds_message_inc_copy_to_user’: net/rds/message.c:328: warning: comparison of distinct pointer types lacks a cast Use min_t(unsigned long, ...) like is done in rds_message_copy_from_user(). Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
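A minimal illustration of the fix with invented names: min() on operands of different types triggers the "comparison of distinct pointer types" warning, while min_t() casts both operands to the stated type first.

    #include <linux/kernel.h>

    static unsigned long copy_budget(unsigned long remaining, size_t iter_count)
    {
        /* min(remaining, iter_count) would warn because the operand types
         * differ; min_t() makes the common type explicit */
        return min_t(unsigned long, remaining, iter_count);
    }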
-