Bug #225

panic in pf_purge_expired_states

Added by corecode over 18 years ago. Updated almost 16 years ago.

Status: Closed
Priority: High
Assignee: -
Category: -
Target version: -
Start date: -
Due date: -
% Done: 0%
Estimated time: -

Description

hey,

about every month i'm getting a panic in pf_purge_expired_states on some
RB_* function.

I talked to dhartmei and he said that there are no problems known to
this code, except for reentrancy issues.

So I wonder: how can the RB-tree be destroyed? Probably some race
between the timer and the network. But can that be? The timer runs in
a critical section, and shouldn't the network too?

several crash dumps available on request.

any clues?

thanks
simon

Actions #1

Updated by dillon over 18 years ago

:hey,
:
:about every month i'm getting a panic in pf_purge_expired_states on some
:RB_* function.
:
:I talked to dhartmei and he said that there are no problems known to
:this code, except for reentrancy issues.
:
:So I wonder: how can the RB-tree be destroyed? Probably some race
:between the timer and the network. But can that be? The timer runs in
:a critical section, and shouldn't the network too?
:
:several crash dumps available on request.
:
:any clues?
:
:thanks
: simon

Please upload them to your leaf account (or reupload them, since the
/build partition for non-permanent files had to be wiped).
I am very interested in wiping out the remaining infrastructure bugs.
-Matt
Matthew Dillon
<>
Actions #2

Updated by max over 18 years ago

On Sunday 02 July 2006 17:17, Simon 'corecode' Schubert wrote:

about every month i'm getting a panic in pf_purge_expired_states on some
RB_* function.

I talked to dhartmei and he said that there are no problems known to
this code, except for reentrancy issues.

So I wonder: how can the RB-tree be destroyed? Probably some race
between the timer and the network. But can that be? The timer runs in
a critical section, and shouldn't the network too?

several crash dumps available on request.

Can you show me a trace instead?

any clues?

We had rb-tree inconsistencies in FreeBSD as well. I think the basis of
the fix was pf.c, rev. 1.35, but it took me some intermediate steps to
finalize it.

Actions #3

Updated by corecode over 18 years ago

Max Laier wrote:

On Sunday 02 July 2006 17:17, Simon 'corecode' Schubert wrote:

about every month i'm getting a panic in pf_purge_expired_states on some
RB_* function.

Can you show me a trace instead?

sure (panics not always here):

#10 0xc578bc7f in pf_state_tree_lan_ext_RB_REMOVE (head=0xc56da9c0,
elm=0xc593a900)
at /usr/src/sys/net/pf/pf.c:272
#11 0xc578df19 in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:814
#12 0xc578db91 in pf_purge_timeout (arg=0xc57aeaa8) at
/usr/src/sys/net/pf/pf.c:701
#13 0xc01a486f in softclock_handler (arg=0xc031e8e0) at
/usr/src/sys/kern/kern_timeout.c:308

We had rb-tree inconsistencies in FreeBSD as well. I think the basis of
the fix was pf.c, rev. 1.35, but it took me some intermediate steps to
finalize it.

yes, I thought that this might be the fix, but the machine is running as
a bridge and not using synproxy.

cheers
simon

Actions #4

Updated by dillon over 18 years ago

:Max Laier wrote:
:> On Sunday 02 July 2006 17:17, Simon 'corecode' Schubert wrote:
:>> about every month i'm getting a panic in pf_purge_expired_states on some
:>> RB_* function.
:> Can you show me a trace instead?
:
:sure (panics not always here):
:
:#10 0xc578bc7f in pf_state_tree_lan_ext_RB_REMOVE (head=0xc56da9c0,
:elm=0xc593a900)
: at /usr/src/sys/net/pf/pf.c:272
:#11 0xc578df19 in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:814
:#12 0xc578db91 in pf_purge_timeout (arg=0xc57aeaa8) at
:...

Well, I see two problems immediately.  It is calling pf_send_tcp()
before removing the item from the tree, which can trivially block
and break the critical section, and it is also calculating the 'next'
RB entry ahead of time in the loop.  If pf_send_tcp() blocks, that
next RB entry can easily be ripped out from under the scan.

The solution is to (1) use DragonFly's RB_SCAN function, which safely
handles any ripouts that occur during the loop as long as the loop
is interlocked (which it is, with a critical section), and (2) I'm
guessing we have to remove the entry from the various trees BEFORE
we call pf_send_tcp().

I am absolutely NOT an expert on PF, and I could only compile-test
this patch, not actually try it, so it might not work, or it might
work and not actually solve the problem.

-Matt

Index: pf.c
===================================================================
RCS file: /cvs/src/sys/net/pf/pf.c,v
retrieving revision 1.8
diff -u -r1.8 pf.c
--- pf.c  11 Dec 2005 13:00:17 -0000  1.8
+++ pf.c  4 Jul 2006 17:14:59 -0000
@@ -793,51 +793,54 @@
     s->src_node = s->nat_src_node = NULL;
 }
 
-void
-pf_purge_expired_states(void)
+static int
+pf_purge_expired_states_callback(struct pf_state *cur, void *data __unused)
 {
-    struct pf_state        *cur, *next;
-
-    for (cur = RB_MIN(pf_state_tree_id, &tree_id);
-        cur; cur = next) {
-        next = RB_NEXT(pf_state_tree_id, &tree_id, cur);
-
-        if (pf_state_expires(cur) <= time_second) {
-            if (cur->src.state == PF_TCPS_PROXY_DST)
-                pf_send_tcp(cur->rule.ptr, cur->af,
-                    &cur->ext.addr, &cur->lan.addr,
-                    cur->ext.port, cur->lan.port,
-                    cur->src.seqhi, cur->src.seqlo + 1, 0,
-                    TH_RST|TH_ACK, 0, 0);
-            RB_REMOVE(pf_state_tree_ext_gwy,
-                &cur->u.s.kif->pfik_ext_gwy, cur);
-            RB_REMOVE(pf_state_tree_lan_ext,
-                &cur->u.s.kif->pfik_lan_ext, cur);
-            RB_REMOVE(pf_state_tree_id, &tree_id, cur);
+    if (pf_state_expires(cur) <= time_second) {
+        RB_REMOVE(pf_state_tree_ext_gwy,
+            &cur->u.s.kif->pfik_ext_gwy, cur);
+        RB_REMOVE(pf_state_tree_lan_ext,
+            &cur->u.s.kif->pfik_lan_ext, cur);
+        RB_REMOVE(pf_state_tree_id, &tree_id, cur);
+        if (cur->src.state == PF_TCPS_PROXY_DST) {
+            pf_send_tcp(cur->rule.ptr, cur->af,
+                &cur->ext.addr, &cur->lan.addr,
+                cur->ext.port, cur->lan.port,
+                cur->src.seqhi, cur->src.seqlo + 1, 0,
+                TH_RST|TH_ACK, 0, 0);
+        }
 #if NPFSYNC
-            pfsync_delete_state(cur);
+        pfsync_delete_state(cur);
 #endif
-            pf_src_tree_remove_state(cur);
-            if (--cur->rule.ptr->states <= 0 &&
-                cur->rule.ptr->src_nodes <= 0)
-                pf_rm_rule(NULL, cur->rule.ptr);
-            if (cur->nat_rule.ptr != NULL)
-                if (--cur->nat_rule.ptr->states <= 0 &&
-                    cur->nat_rule.ptr->src_nodes <= 0)
-                    pf_rm_rule(NULL, cur->nat_rule.ptr);
-            if (cur->anchor.ptr != NULL)
-                if (--cur->anchor.ptr->states <= 0)
-                    pf_rm_rule(NULL, cur->anchor.ptr);
-            pf_normalize_tcp_cleanup(cur);
-            pfi_detach_state(cur->u.s.kif);
-            TAILQ_REMOVE(&state_updates, cur, u.s.entry_updates);
-            pool_put(&pf_state_pl, cur);
-            pf_status.fcounters[FCNT_STATE_REMOVALS]++;
-            pf_status.states--;
-        }
+        pf_src_tree_remove_state(cur);
+        if (--cur->rule.ptr->states <= 0 &&
+            cur->rule.ptr->src_nodes <= 0)
+            pf_rm_rule(NULL, cur->rule.ptr);
+        if (cur->nat_rule.ptr != NULL)
+            if (--cur->nat_rule.ptr->states <= 0 &&
+                cur->nat_rule.ptr->src_nodes <= 0)
+                pf_rm_rule(NULL, cur->nat_rule.ptr);
+        if (cur->anchor.ptr != NULL)
+            if (--cur->anchor.ptr->states <= 0)
+                pf_rm_rule(NULL, cur->anchor.ptr);
+        pf_normalize_tcp_cleanup(cur);
+        pfi_detach_state(cur->u.s.kif);
+        TAILQ_REMOVE(&state_updates, cur, u.s.entry_updates);
+        pool_put(&pf_state_pl, cur);
+        pf_status.fcounters[FCNT_STATE_REMOVALS]++;
+        pf_status.states--;
     }
+    return(0);
+}
 
 void
+pf_purge_expired_states(void)
 {
+    RB_SCAN(pf_state_tree_id, &tree_id, NULL,
+        pf_purge_expired_states_callback, NULL);
 }
 
+
 int
 pf_tbladdr_setup(struct pf_ruleset *rs, struct pf_addr_wrap *aw)
 {
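
In essence the patch swaps an unsafe "cache the next pointer, then maybe block" loop for a callback scan that unlinks the node from all trees before doing anything that can block. A minimal standalone sketch of the two patterns, using hypothetical helper names (tree_min, tree_next, tree_scan, and so on; this is not the actual pf or DragonFly RB API):

#include <stdbool.h>
#include <stddef.h>

struct state;                           /* opaque expired-state record (illustration) */
struct tree;                            /* opaque search tree (illustration)          */

/* Hypothetical helpers, declared only so the sketch is self-contained. */
struct state    *tree_min(struct tree *);
struct state    *tree_next(struct tree *, struct state *);
void             tree_remove(struct tree *, struct state *);
void             tree_scan(struct tree *, int (*)(struct state *, void *), void *);
bool             state_expired(const struct state *);
void             send_reset(struct state *);    /* may block, like pf_send_tcp() */
void             free_state(struct state *);

/* Unsafe pattern: the successor is cached before work that can block. */
void
purge_unsafe(struct tree *t)
{
    struct state *cur, *next;

    for (cur = tree_min(t); cur != NULL; cur = next) {
        next = tree_next(t, cur);       /* cached now ...                      */
        if (state_expired(cur)) {
            send_reset(cur);            /* ... may block here, letting another */
                                        /* code path free or relink 'next'     */
            tree_remove(t, cur);
            free_state(cur);
        }
    }
}

/* Safer pattern: unlink first, block afterwards, and let a scan helper
 * re-resolve the successor after every callback. */
static int
purge_one(struct state *cur, void *arg)
{
    struct tree *t = arg;

    if (state_expired(cur)) {
        tree_remove(t, cur);            /* take it out of the tree first */
        send_reset(cur);                /* blocking work only afterwards */
        free_state(cur);
    }
    return (0);
}

void
purge_safe(struct tree *t)
{
    tree_scan(t, purge_one, t);         /* RB_SCAN plays this role in the patch */
}

DragonFly's RB_SCAN plays the role of tree_scan here: it re-resolves the successor after each callback, so nodes removed by the callback cannot leave the iterator pointing at stale memory, as long as the scan as a whole stays interlocked by the critical section.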

Actions #5

Updated by justin about 18 years ago

corecode reports no problems for an extended period; considered fixed unless
someone finds a way to reproduce it.

Actions #6

Updated by corecode almost 18 years ago

panic 10:

#20 0xc576ba59 in pf_state_tree_lan_ext_RB_REMOVE_COLOR (head=0xc56d46c0,
parent=0x0, elm=0x0)
at /usr/src/sys/net/pf/pf.c:272
#21 0xc576bc7f in pf_state_tree_lan_ext_RB_REMOVE (head=0xc56d46c0, elm=0xc5b8d400)
at /usr/src/sys/net/pf/pf.c:272
#22 0xc576de7e in pf_purge_expired_states_callback (cur=0xc5b8d400, data=0x0)
at /usr/src/sys/net/pf/pf.c:802
#23 0xc576cdb5 in pf_state_tree_id_RB_SCAN (head=0xc578e848,
scancmp=0xc576ccf9 <pf_state_tree_id_SCANCMP_ALL>,
callback=0xc576de38 <pf_purge_expired_states_callback>, data=0x0)
at /usr/src/sys/net/pf/pf.c:276
#24 0xc576e03e in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:839
#25 0xc576db91 in pf_purge_timeout (arg=0xc578eac8) at /usr/src/sys/net/pf/pf.c:701

panic 9:

#20 0xc573ba59 in pf_state_tree_lan_ext_RB_REMOVE_COLOR (head=0xc56c9bc0,
parent=0x0, elm=0x0)
at /usr/src/sys/net/pf/pf.c:272
#21 0xc573bc7f in pf_state_tree_lan_ext_RB_REMOVE (head=0xc56c9bc0, elm=0xc585a400)
at /usr/src/sys/net/pf/pf.c:272
#22 0xc573de7e in pf_purge_expired_states_callback (cur=0xc585a400, data=0x0)
at /usr/src/sys/net/pf/pf.c:802
#23 0xc573cdb5 in pf_state_tree_id_RB_SCAN (head=0xc575e848,
scancmp=0xc573ccf9 <pf_state_tree_id_SCANCMP_ALL>,
callback=0xc573de38 <pf_purge_expired_states_callback>, data=0x0)
at /usr/src/sys/net/pf/pf.c:276
#24 0xc573e03e in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:839
#25 0xc573db91 in pf_purge_timeout (arg=0xc575eac8) at /usr/src/sys/net/pf/pf.c:701

panic 8:

#20 0xc578c870 in pf_state_tree_id_RB_REMOVE_COLOR (head=0xc57ae848, parent=0x0,
elm=0x0)
at /usr/src/sys/net/pf/pf.c:276
#21 0xc578cc17 in pf_state_tree_id_RB_REMOVE (head=0xc57ae848, elm=0xc5dcfc00)
at /usr/src/sys/net/pf/pf.c:276
#22 0xc578de8e in pf_purge_expired_states_callback (cur=0xc5dcfc00, data=0x0)
at /usr/src/sys/net/pf/pf.c:804
#23 0xc578cdb5 in pf_state_tree_id_RB_SCAN (head=0xc57ae848,
scancmp=0xc578ccf9 <pf_state_tree_id_SCANCMP_ALL>,
callback=0xc578de38 <pf_purge_expired_states_callback>, data=0x0)
at /usr/src/sys/net/pf/pf.c:276
#24 0xc578e03e in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:839
#25 0xc578db91 in pf_purge_timeout (arg=0xc57aeac8) at /usr/src/sys/net/pf/pf.c:701

panic 7:

#20 0xc572c0a4 in pf_state_tree_ext_gwy_RB_REMOVE_COLOR (head=0xc56c8dc8,
parent=0x0, elm=0x0)
at /usr/src/sys/net/pf/pf.c:274
#21 0xc572c44b in pf_state_tree_ext_gwy_RB_REMOVE (head=0xc56c8dc8, elm=0xc5eca700)
at /usr/src/sys/net/pf/pf.c:274
#22 0xc572de6a in pf_purge_expired_states_callback (cur=0xc5eca700, data=0x0)
at /usr/src/sys/net/pf/pf.c:800
#23 0xc572cdb5 in pf_state_tree_id_RB_SCAN (head=0xc574e848,
scancmp=0xc572ccf9 <pf_state_tree_id_SCANCMP_ALL>,
callback=0xc572de38 <pf_purge_expired_states_callback>, data=0x0)
at /usr/src/sys/net/pf/pf.c:276
#24 0xc572e03e in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:839
#25 0xc572db91 in pf_purge_timeout (arg=0xc574eac8) at /usr/src/sys/net/pf/pf.c:701

all on a 1.4.4 system (patched, I guess)

Actions #7

Updated by elekktretterr almost 18 years ago

Here we go again...

Damn, I was hoping it had somehow been fixed. :(

Petr

Actions #8

Updated by dillon almost 18 years ago

Still on a 1.4.x system? Can we get a kernel core?

-Matt
Actions #9

Updated by dillon almost 18 years ago

:
:Simon 'corecode' Schubert <> added the comment:
:
:panic 10:
:
:#20 0xc576ba59 in pf_state_tree_lan_ext_RB_REMOVE_COLOR (head=0xc56d46c0,
:parent=0x0, elm=0x0)
: at /usr/src/sys/net/pf/pf.c:272
:...

Also, what is the actual panic message and trap frame?
-Matt
Actions #10

Updated by corecode almost 18 years ago

okay, more details on this one now:

- all panics happen in the RB_REMOVE functions called from pf_purge_expired_states/RB_SCAN().
- all are null pointer dereferences in RB_REMOVE_COLOR
- it seems this is due to a "broken" rb tree layout, i.e. one black parent having only one black child, which violates the red-black invariants (see the sketch at the end of this comment)
- all states had an expire time approximately 15-20 seconds in the past
- all states had an expire time 0-2 seconds after their creation
- all states were of type IPV6_ICMP
- all states had gwy and lan address set to a very strange kind of address, like ff02:5:0:0:0:1:ff00:2 (only the last two parts seem to change sometimes); the ext address was set to ::0 (except for one)
- all states had direction = PF_IN (again with that one exception)

so this looks quite like a software bug within pf, but I am open to other suggestions.

there indeed happens to be occasional ipv6 traffic on the wire.

cheers
simon
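
For reference, the kind of "broken" layout described above (a black parent whose only child is black) violates the red-black black-height invariant, so it can be detected by a simple recursive walk. A minimal sketch over a hypothetical node layout (pf's real trees are generated by the RB_* macros from <sys/tree.h>, which keep color and parent links differently):

#include <stddef.h>

/* Hypothetical simplified node; illustration only. */
struct rb_node {
    struct rb_node  *left;
    struct rb_node  *right;
    int              red;       /* nonzero = red, 0 = black */
};

/*
 * Return the black-height of the subtree rooted at 'n', or -1 if a
 * red-black invariant is violated: unequal black heights on the two
 * sides (e.g. a black parent whose only child is black), or a red
 * node with a red child.
 */
static int
rb_check(const struct rb_node *n)
{
    int lh, rh;

    if (n == NULL)
        return (1);             /* NULL leaves count as black */
    lh = rb_check(n->left);
    rh = rb_check(n->right);
    if (lh < 0 || rh < 0 || lh != rh)
        return (-1);            /* black-height mismatch       */
    if (n->red &&
        ((n->left != NULL && n->left->red) ||
         (n->right != NULL && n->right->red)))
        return (-1);            /* red node with a red child   */
    return (lh + (n->red ? 0 : 1));
}

Running such a check from the purge scan would flag the corruption the first time it is seen, rather than panicking later inside RB_REMOVE_COLOR.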

Actions #11

Updated by TGEN almost 18 years ago

ff02:: isn't that strange:

ff02::%ex0 link#1 U
ff02::%rtk0 link#2 U
ff02::%rtk1 link#3 U
ff02::%lo0 ::1 U
ff02::%vlan0 link#5 U
ff02::%gif0 link#6 U
ff02::%gre0 fe80::20a:5eff:fe45:1f02%gre0 U
ff02::%gre1 fe80::20a:5eff:fe45:1f02%gre1 U
ff02::%gre2 fe80::20a:5eff:fe45:1f02%gre2 U
ff02::%gre3 fe80::20a:5eff:fe45:1f02%gre3 U
ff02::%gre4 fe80::20a:5eff:fe45:1f02%gre4 U

Cheers,
--
Thomas E. Spanjaard

Actions #12

Updated by corecode over 17 years ago

Hey,

Just wanted to follow up and note that this bug still exists:

Fatal trap 12: page fault while in kernel mode

Uptime: 30d2h59m26s

#6 0xc028c016 in calltrap () at /usr/src/sys/platform/pc32/i386/exception.s:783
#7 0xc56efc64 in pf_state_tree_id_RB_REMOVE_COLOR (head=0xc57113e8, parent=0xc5b92d00,
elm=0xc58d9800) at /usr/src/sys/net/pf/pf.c:276
#8 0xc56efe8a in pf_state_tree_id_RB_REMOVE (head=0xc57113e8, elm=0xc5c34500)
at /usr/src/sys/net/pf/pf.c:276
#9 0xc56f1159 in pf_purge_expired_states_callback (cur=0xc5c34500, data=0x0)
at /usr/src/sys/net/pf/pf.c:804
#10 0xc56f0028 in pf_state_tree_id_RB_SCAN (head=0xc57113e8,
scancmp=0xc56eff6c <pf_state_tree_id_SCANCMP_ALL>,
callback=0xc56f1103 <pf_purge_expired_states_callback>, data=0x0)
at /usr/src/sys/net/pf/pf.c:276
#11 0xc56f1309 in pf_purge_expired_states () at /usr/src/sys/net/pf/pf.c:839
#12 0xc56f0e5c in pf_purge_timeout (arg=0xc5711668) at /usr/src/sys/net/pf/pf.c:701
#13 0xc0196d2f in softclock_handler (arg=0xc0311a60) at /usr/src/sys/kern/kern_timeout.c:307

system is:

DragonFly bridget.fs.ei.tum.de 1.8.2-RELEASE DragonFly 1.8.2-RELEASE #0: Thu Jun 21 18:57:45 CEST 2007 :/usr/obj/usr/src/sys/BRIDGET i386

cheers
simon

Actions #13

Updated by corecode almost 16 years ago

magically stopped panicking
