[Open-FCoE] [PATCH] RFC fcoe: flush per-cpu thread work when destroying interface

Joe Eykholt jeykholt at cisco.com
Mon Jul 27 16:50:56 UTC 2009


I occasionally hit a problem when unloading libfc where the
exchange manager pool doesn't have all of its items freed.

The existing WARN_ON(mp->total_exches <= 0) isn't hit.
However, note that total_exches is decremented when the
exchange is completed, and it can be held with a refcnt
for a while after that.
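
To make that concrete, here's a tiny user-space sketch (hypothetical
names, not libfc code) of how a pool counter can already read zero
while a held reference still keeps the item allocated:

```c
#include <stdlib.h>

/* Hypothetical sketch, not libfc code: a pool counter that drops at
 * completion while a reference count keeps the object itself alive. */
struct xchg {
	int refcnt;
};

static int total_exches;		/* stands in for mp->total_exches */

struct xchg *xchg_alloc(void)
{
	struct xchg *ep = calloc(1, sizeof(*ep));

	ep->refcnt = 1;
	total_exches++;
	return ep;
}

void xchg_hold(struct xchg *ep)
{
	ep->refcnt++;
}

/* returns 1 if the memory was actually freed */
int xchg_put(struct xchg *ep)
{
	if (--ep->refcnt == 0) {
		free(ep);
		return 1;
	}
	return 0;
}

/* completion drops the counter and the state machine's reference, but
 * any extra hold (say, a receive thread) defers the actual free */
int xchg_complete(struct xchg *ep)
{
	total_exches--;
	return xchg_put(ep);
}
```

So total_exches can be zero (the WARN_ON doesn't fire) while the item
has still not been returned to the pool.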

I'm not sure what the offending exchange is, but I suspect
it is an incoming request, because outgoing state machines
should be all stopped at this point.

Note that although receive is stopped before the exchange
manager is freed, there could still be active threads
handling received frames.

Maybe we need something similar to flush_scheduled_work() that
makes sure the per-cpu receive threads have completed handling
any packets for this instance.

One thought is to use work threads instead of the fcoe
per-cpu receive threads.  That's a larger patch, but it
would simplify this and make the management of CPU events
be taken care of by the worker mechanism.  One problem
with this is it would require a work_struct to either be
part of the skb (it isn't and it won't fit in the skb->cb),
or we'd have to allocate it separately, or make a work-item
with a queue per interface instance.  The latter is clean,
but it may make it hard to provide fairness among multiple
instances.  A single queue for all instances is better.

So, back to the existing model.  One way to flush the queues
is to allocate a new skb and send it through, and have
the thread handle this new skb specially.
This would be similar to the way the work queues
are flushed now by putting work items in them and waiting
until they make it through the queue.
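
Roughly, the scheme looks like this in user-space C, with pthreads
standing in for the kernel primitives (all names here are illustrative,
none are from the fcoe code):

```c
/*
 * Sketch of the sentinel-flush scheme: queue a marker item behind any
 * pending work; its destructor-style callback wakes the flusher, which
 * proves the consumer has drained everything queued ahead of it.
 */
#include <pthread.h>
#include <stdlib.h>

struct item {
	struct item *next;
	void (*destructor)(struct item *);	/* plays the role of skb->destructor */
	int payload;				/* negative marks the flush sentinel */
};

static struct item *head, *tail;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* one-shot completion, like the patch's fcoe_flush_completion */
static pthread_mutex_t flush_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t flush_cond = PTHREAD_COND_INITIALIZER;
static int flush_signalled;
static int handled_count;

static void enqueue(int payload, void (*dtor)(struct item *))
{
	struct item *it = calloc(1, sizeof(*it));

	it->payload = payload;
	it->destructor = dtor;
	pthread_mutex_lock(&qlock);
	if (tail)
		tail->next = it;
	else
		head = it;
	tail = it;
	pthread_cond_signal(&nonempty);
	pthread_mutex_unlock(&qlock);
}

static void flush_done(struct item *it)
{
	(void)it;
	pthread_mutex_lock(&flush_lock);
	flush_signalled = 1;
	pthread_cond_signal(&flush_cond);
	pthread_mutex_unlock(&flush_lock);
}

/* consumer: like the receive thread, it just frees the sentinel, and
 * freeing runs the destructor that wakes the flusher */
static void *worker(void *arg)
{
	(void)arg;
	for (;;) {
		struct item *it;
		int sentinel;

		pthread_mutex_lock(&qlock);
		while (!head)
			pthread_cond_wait(&nonempty, &qlock);
		it = head;
		head = it->next;
		if (!head)
			tail = NULL;
		pthread_mutex_unlock(&qlock);

		sentinel = it->payload < 0;
		if (!sentinel)
			handled_count++;	/* "handle" an ordinary item */
		if (it->destructor)
			it->destructor(it);
		free(it);
		if (sentinel)			/* demo only: the real thread keeps running */
			return NULL;
	}
}

/* queue n items, flush, and report how many were handled first */
int run_demo(int n)
{
	pthread_t tid;
	int i;

	handled_count = 0;
	flush_signalled = 0;
	pthread_create(&tid, NULL, worker, NULL);
	for (i = 0; i < n; i++)
		enqueue(i, NULL);
	enqueue(-1, flush_done);		/* the flush sentinel */
	pthread_mutex_lock(&flush_lock);
	while (!flush_signalled)
		pthread_cond_wait(&flush_cond, &flush_lock);
	pthread_mutex_unlock(&flush_lock);
	pthread_join(tid, NULL);
	return handled_count;
}
```

Because the queue is FIFO, the completion can only fire after every
item queued ahead of the sentinel has been consumed and freed.
(Compile with -pthread.)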

An skb->destructor function can be used to inform us that the
flush has completed, but then we need some dummy content in the
skb that is harmless to fcoe_percpu_receive_thread().  There's
already a check for lp being NULL; that check is otherwise
unnecessary, but we can reuse it.  If lp is NULL, the thread
prints a message and frees the skb.  We skip printing the message
in this case, so nothing is added to the fast path.

Sending as an RFC because it's insufficiently tested (I run into
the 3-thread hang noted separately) and to get opinions.

not yet Signed-off-by: Joe Eykholt <jeykholt at cisco.com>
---
 drivers/scsi/fcoe/fcoe.c |   44 ++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 42 insertions(+), 2 deletions(-)


diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 4416acf..53d88f1 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -51,6 +51,9 @@ MODULE_LICENSE("GPL v2");
 
 DEFINE_MUTEX(fcoe_create_mutex);
 
+/* fcoe_percpu_clean completion.  Waiter protected by fcoe_create_mutex */
+static DECLARE_COMPLETION(fcoe_flush_completion);
+
 /* fcoe host list */
 /* must only by accessed under the RTNL mutex */
 LIST_HEAD(fcoe_hostlist);
@@ -823,7 +826,7 @@ static void fcoe_percpu_thread_create(unsigned int cpu)
 	thread = kthread_create(fcoe_percpu_receive_thread,
 				(void *)p, "fcoethread/%d", cpu);
 
-	if (likely(!IS_ERR(p->thread))) {
+	if (likely(!IS_ERR(thread))) {
 		kthread_bind(thread, cpu);
 		wake_up_process(thread);
 
@@ -1299,6 +1302,15 @@ int fcoe_xmit(struct fc_lport *lp, struct fc_frame *fp)
 }
 
 /**
+ * fcoe_percpu_flush_done() - Indicate percpu queue flush completion.
+ * @skb: the skb being completed.
+ */
+static void fcoe_percpu_flush_done(struct sk_buff *skb)
+{
+	complete(&fcoe_flush_completion);
+}
+
+/**
  * fcoe_percpu_receive_thread() - recv thread per cpu
  * @arg: ptr to the fcoe per cpu struct
  *
@@ -1337,7 +1349,10 @@ int fcoe_percpu_receive_thread(void *arg)
 		fr = fcoe_dev_from_skb(skb);
 		lp = fr->fr_dev;
 		if (unlikely(lp == NULL)) {
-			FCOE_NETDEV_DBG(skb->dev, "Invalid HBA Structure");
+			if (skb->destructor != fcoe_percpu_flush_done)
+				FCOE_NETDEV_DBG(skb->dev, "NULL lport in skb");
+			else
+				printk(KERN_INFO "fcoe: flush skb found\n");
 			kfree_skb(skb);
 			continue;
 		}
@@ -1797,6 +1812,13 @@ int fcoe_link_ok(struct fc_lport *lp)
 /**
  * fcoe_percpu_clean() - Clear the pending skbs for an lport
  * @lp: the fc_lport
+ *
+ * Must be called with fcoe_create_mutex held to single-thread completion.
+ *
+ * This flushes the pending skbs by adding a new skb to each queue and
+ * waiting until they are all freed.  This assures us that not only are
+ * there no packets that will be handled by the lport, but also that any
+ * threads already handling packets have returned.
  */
 void fcoe_percpu_clean(struct fc_lport *lp)
 {
@@ -1821,8 +1843,28 @@ void fcoe_percpu_clean(struct fc_lport *lp)
 				kfree_skb(skb);
 			}
 		}
+
+		if (!pp->thread || !cpu_online(cpu)) {
+			spin_unlock_bh(&pp->fcoe_rx_list.lock);
+			continue;
+		}
+
+		skb = dev_alloc_skb(0);
+		if (!skb) {
+			spin_unlock_bh(&pp->fcoe_rx_list.lock);
+			continue;
+		}
+		skb->destructor = fcoe_percpu_flush_done;
+
+		__skb_queue_tail(&pp->fcoe_rx_list, skb);
+		if (pp->fcoe_rx_list.qlen == 1)
+			wake_up_process(pp->thread);
 		spin_unlock_bh(&pp->fcoe_rx_list.lock);
+
+printk(KERN_INFO "fcoe: waiting cpu %d\n", cpu);
+		wait_for_completion(&fcoe_flush_completion);
 	}
+printk(KERN_INFO "fcoe: fcoe_percpu_clean done\n");
 }
 
 /**
