Polish docs

parent ea1d9548
@@ -829,7 +829,7 @@ above values over longer time spans. But depending on how the cache is
 used and tuned, that point might well be in the region of 70% and
 below.
 
-The fact that fellow does not, by default, attempt to use each and
+The fact that `fellow` does not, by default, attempt to use each and
 every byte of the available cache is a deliberate decision:
 
 To achieve optimal disk and network I/O throughput, object data should
@@ -840,15 +840,18 @@ better option. Also, it might be better to return a smaller region
 than to split a larger region, which could instead be used for a
 larger object coming in later.
 
-The *cram* parameter (see `xbuddy.tune()`_) controls this trade off:
-If *cram* allows a smaller segment, it is returned, otherwise the
-allocator needs to wait for LRU to make room.
+The *cram* parameter controls this trade off: If *cram* allows a
+smaller segment, it is returned, otherwise the allocator needs to wait
+for LRU to make room.
 
 While higher absolute *cram* values improve space usage, they lead to
 higher fragmentation and might negatively impact performance. Positive
 *cram* values avoid using larger free regions for smaller
 requests. Negative *cram* values do not.
 
+See `xbuddy.tune()`_ for additional explanations on *cram*; tuning for
+`fellow` happens through `xfellowy.tune()`_.
+
 Another factor is that the LRU algorithm pre-evicts segments and
 objects from cache until ``mem_reserve_chunks`` have been reserved
@@ -750,7 +750,7 @@ above values over longer time spans. But depending on how the cache is
 used and tuned, that point might well be in the region of 70% and
 below.
 
-The fact that fellow does not, by default, attempt to use each and
+The fact that `fellow` does not, by default, attempt to use each and
 every byte of the available cache is a deliberate decision:
 
 To achieve optimal disk and network I/O throughput, object data should
@@ -761,15 +761,18 @@ better option. Also, it might be better to return a smaller region
 than to split a larger region, which could instead be used for a
 larger object coming in later.
 
-The *cram* parameter (see `xbuddy.tune()`_) controls this trade off:
-If *cram* allows a smaller segment, it is returned, otherwise the
-allocator needs to wait for LRU to make room.
+The *cram* parameter controls this trade off: If *cram* allows a
+smaller segment, it is returned, otherwise the allocator needs to wait
+for LRU to make room.
 
 While higher absolute *cram* values improve space usage, they lead to
 higher fragmentation and might negatively impact performance. Positive
 *cram* values avoid using larger free regions for smaller
 requests. Negative *cram* values do not.
 
+See `xbuddy.tune()`_ for additional explanations on *cram*; tuning for
+`fellow` happens through `xfellowy.tune()`_.
+
 Another factor is that the LRU algorithm pre-evicts segments and
 objects from cache until ``mem_reserve_chunks`` have been reserved
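The *cram* trade-off described in the hunks above can be sketched in a few lines. This is a hedged illustration only: the diff does not spell out the exact `fellow` semantics, so we assume a buddy allocator working in power-of-two orders, interpret *cram* as the number of orders an allocation may deviate from the request, and model the sign convention the text describes (positive *cram* only permits returning a smaller free region; negative *cram* additionally permits using a larger one). The function name and signature are hypothetical, not the `fellow` API.

```python
def try_allocate(request_order: int, free_orders: set[int], cram: int):
    """Pick the order of a free region to satisfy a request, or return
    None to signal that the allocator must wait for LRU to make room."""
    if request_order in free_orders:
        return request_order          # exact fit always wins
    if cram == 0:
        return None                   # no deviation allowed: wait for LRU
    # Smaller regions within |cram| orders below the request are acceptable.
    smaller = [o for o in free_orders
               if request_order - abs(cram) <= o < request_order]
    if smaller:
        return max(smaller)           # closest smaller fit
    if cram < 0:
        # Negative cram also allows taking (splitting) a larger free region.
        larger = [o for o in free_orders if o > request_order]
        if larger:
            return min(larger)        # smallest sufficient region
    return None                       # nothing acceptable: wait for LRU
```

Under these assumptions, a request of order 5 against free regions of orders {3, 7} yields order 3 with `cram=2`, waits with `cram=0`, and may take the order-7 region only with a negative *cram* such as `-2` — mirroring the prose: positive values avoid using larger free regions for smaller requests, negative values do not.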