Discussion:
In nfs_dir_read() count used before and after limiting?
Peter Dufault
2014-09-11 13:58:34 UTC
Permalink
In nfs_dir_read() I see:

/* align + round down the buffer */
count &= ~ (DIRENT_HEADER_SIZE - 1);
di->len = count;

Then later:
if (count > NFS_MAXDATA)
count = NFS_MAXDATA;

di->readdirargs.count = count;

Can someone who understands this code comment on it?

Peter
-----------------
Peter Dufault
HD Associates, Inc. Software and System Engineering
Sebastian Huber
2014-09-11 14:03:39 UTC
Permalink
Post by Peter Dufault
/* align + round down the buffer */
count &= ~ (DIRENT_HEADER_SIZE - 1);
di->len = count;
if (count > NFS_MAXDATA)
count = NFS_MAXDATA;
di->readdirargs.count = count;
Can someone who understands this code comment on it?
Sorry, Peter. I don't have time to look at this at the moment. The NFS code
is hard to understand. Since it works for me I have no urgent need to spend
time on this.
--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone : +49 89 189 47 41-16
Fax : +49 89 189 47 41-09
E-Mail : sebastian.huber-L1vi/***@public.gmane.org
PGP : Public key available on request.

Diese Nachricht ist keine geschäftliche Mitteilung im Sinne des EHUG.
Peter Dufault
2014-09-13 13:32:53 UTC
Permalink
Post by Sebastian Huber
Post by Peter Dufault
/* align + round down the buffer */
count &= ~ (DIRENT_HEADER_SIZE - 1);
di->len = count;
if (count > NFS_MAXDATA)
count = NFS_MAXDATA;
di->readdirargs.count = count;
Can someone who understands this code comment on it?
Sorry, Peter. I don't have time to look at this at the moment. The NFS code
is hard to understand. Since it works for me I have no urgent need to spend
time on this.
Can you answer one question? If nfsStBlksize is set to zero so that this:

/* Set to "preferred size" of this NFS client implementation */
buf->st_blksize = nfsStBlksize ? nfsStBlksize : fa->blocksize;

evaluates to 4096, then is it a bug if transfers larger than 4K are attempted?

Limiting all transfers to values other than the default 8K can lead to memory corruption: 512B works, 1K works, but 2K and 4K corrupt things. The changes are minor (attached).

If it's a bug for transfers to be larger than 4K when fa->blocksize=4K I can look at that, that's easy to trap when it happens.

Peter
-----------------
Peter Dufault
HD Associates, Inc. Software and System Engineering
Joel Sherrill
2014-09-13 15:25:21 UTC
Permalink
On Sep 11, 2014, at 10:03, Sebastian Huber wrote:
Post by Sebastian Huber
Post by Peter Dufault
/* align + round down the buffer */
count &= ~ (DIRENT_HEADER_SIZE - 1);
di->len = count;
if (count > NFS_MAXDATA)
count = NFS_MAXDATA;
di->readdirargs.count = count;
Can someone who understands this code comment on it?
Sorry, Peter. I don't have time to look at this at the moment. The NFS code
is hard to understand. Since it works for me I have no urgent need to spend
time on this.
Post by Peter Dufault
Can you answer one question? If nfsStBlksize is set to zero so that this:
/* Set to "preferred size" of this NFS client implementation */
buf->st_blksize = nfsStBlksize ? nfsStBlksize : fa->blocksize;
evaluates to 4096, then is it a bug if transfers larger than 4K are attempted?
Limiting all transfers to values other than the default 8K can lead to memory corruption: 512B works, 1K works, but 2K and 4K corrupt things. The changes are minor (attached).
If it's a bug for transfers to be larger than 4K when fa->blocksize=4K I can look at that, that's easy to trap when it happens.
Speaking purely from a defensive programming stance, with no knowledge of what is possible in NFS, I would put the trap in. This is a case where the code will do bad things if it happens, so it could either be something that should never happen or an odd configuration the code does not support. Either way, it is better to detect and report than to tromp on memory.

If your work simplifies the code, that's always a good thing, but it is certainly not a requirement for addressing the issues.
--joel
Peter Dufault
2014-09-15 11:45:57 UTC
Permalink
Post by Joel Sherrill
Post by Peter Dufault
If it's a bug for transfers to be larger than 4K when fa->blocksize=4K
I can look at that, that's easy to trap when it happens.
Speaking purely from a defensive programming stance, with no knowledge of what is possible in NFS, I would put the trap in. This is a case where the code will do bad things if it happens, so it could either be something that should never happen or an odd configuration the code does not support. Either way, it is better to detect and report than to tromp on memory.
Because there is no buffer cache, the transfer size in a read or write call is passed directly to the driver. In particular, the RTEMS shell cp command uses 8K while newlib stdio uses the underlying block size. So in the RTEMS case you will get arbitrarily sized read and write calls to the NFS code.

Peter
-----------------
Peter Dufault
HD Associates, Inc. Software and System Engineering
