[Unbound-users] unbound should probably manage other RLIMITs as well...

Greg A. Woods woods at planix.ca
Wed Oct 28 22:29:04 UTC 2009


At Wed, 28 Oct 2009 14:44:00 +0000, Tony Finch <dot at dotat.at> wrote:
Subject: Re: [Unbound-users] unbound should probably manage other RLIMITs as well...
> 
> On Wed, 28 Oct 2009, W.C.A. Wijngaards wrote:
> >
> > > I think Unbound should probably try to manage other of its RLIMIT values
> > > besides just the number of open files.
> >
> > I am hesitant, based on 'code bloat' reasons to do this.  I see this is
> > something for BSD and Solaris mostly as Linux does not impose limits by
> > default it seems.

I really can't see the sense (in the server realm at least, and
especially outside of the low-end embedded world) of running a
production server on a platform that doesn't support some kind of
per-process resource limit controls, and if I'm not mistaken all modern
POSIX-like server OS platforms do support such controls, mostly in a
way directly compatible with getrlimit(2)/setrlimit(2).

Note that getrlimit() and setrlimit() have been part of the Single UNIX
Specification since Version 2 in 1997; that's well over a decade now:

	http://opengroup.org/onlinepubs/007908799/xsh/getrlimit.html

It's not really relevant whether any given kernel or environment sets
such limits by default, just so long as they can be set.
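
For example, here's a minimal sketch (using RLIMIT_DATA; the resource
that actually matters for a given allocator might be RLIMIT_AS or the
local equivalent) that queries the limits and then raises the soft
limit to the hard limit, which any unprivileged process may do:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft=%llu hard=%llu\n",
            (unsigned long long) rl.rlim_cur,
            (unsigned long long) rl.rlim_max);

        /* raising the soft limit up to the hard limit needs no
         * special privilege */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        return 0;
    }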


> Why not try to auto-size the various memory pools based on the memory
> rlimit, and warn if configured pool sizes are likely to exceed the rlimit?

Something like that would probably make a little bit more sense.

However, if sizing could be controlled by simply growing until there's
a malloc() failure, that would solve lots of problems in what I think
would be the simplest way possible: it would eliminate all system
dependencies, all need for special in-application user configuration
controls, the need for estimating malloc() overhead (since that can
change per runtime instance), and so on.

(Of course that would mean allocating growing things in whole units so
that a failure could be rerouted to reuse of an existing allocation,
and being sure to pre-allocate everything else needed at or near
startup.)
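
Just to sketch what I mean by "whole units" (the names, the unit size,
and the recycling policy here are all made up for illustration; this
isn't Unbound code):

    #include <stdlib.h>

    #define UNIT_SIZE (64 * 1024)   /* grow in whole 64 KB units */

    struct unit { struct unit *next; };
    static struct unit *reuse_list; /* hypothetical list of units that
                                     * can be recycled, e.g. LRU order */

    static void *get_unit(void)
    {
        void *p = malloc(UNIT_SIZE);

        if (p != NULL)
            return p;       /* still room to grow */
        /* malloc() failed: stop growing and recycle an existing unit
         * instead, so the failure never reaches the caller */
        if (reuse_list != NULL) {
            p = reuse_list;
            reuse_list = reuse_list->next;
        }
        return p;   /* NULL only if nothing can be reused either */
    }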

(And of course some initial table size ratios could be determined
automatically by first querying getrlimit() to see how much total
usage will be allowed.)
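
Something like this, say (again just a sketch; RLIMIT_DATA vs.
RLIMIT_AS, the fallback budget, and the ratios are all placeholders):

    #include <stddef.h>
    #include <sys/resource.h>

    static void initial_sizes(size_t *msg_bytes, size_t *rrset_bytes)
    {
        struct rlimit rl;
        size_t budget = 256 * 1024 * 1024;  /* fallback if unlimited */

        if (getrlimit(RLIMIT_DATA, &rl) == 0 &&
            rl.rlim_cur != RLIM_INFINITY)
            budget = (size_t) rl.rlim_cur;
        *msg_bytes = budget / 4;    /* e.g. 25% for the message cache */
        *rrset_bytes = budget / 2;  /* e.g. 50% for the rrset cache */
    }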

If you want to make things a little more complicated then you could
"trap" on malloc() failure and try to increase the soft limit (up to
the hard limit of course) in order to allow a retry of the current
allocation, and then set the table size limits at the current usage.
Then you'd just have to work out some reasonable values to suggest to
admins so that they can leave some headroom in the soft limit.
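
The "trap" could be as simple as a wrapper around malloc() (a sketch
only; a real implementation would also have to freeze the table size
limits at the usage reached when the retry first fires):

    #include <stdlib.h>
    #include <sys/resource.h>

    static void *malloc_retry(size_t n)
    {
        struct rlimit rl;
        void *p = malloc(n);

        if (p != NULL)
            return p;
        /* try raising the soft limit toward the hard limit, then
         * retry the current allocation once */
        if (getrlimit(RLIMIT_DATA, &rl) == 0 &&
            rl.rlim_cur < rl.rlim_max) {
            rl.rlim_cur = rl.rlim_max;
            if (setrlimit(RLIMIT_DATA, &rl) == 0)
                p = malloc(n);
        }
        return p;
    }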

-- 
						Greg A. Woods
						Planix, Inc.

<woods at planix.com>       +1 416 218 0099        http://www.planix.com/