[Open-FCoE] [PATCH 00/28] rport/lport/ns rework

Robert Love robert.w.love at intel.com
Wed Oct 1 23:24:53 UTC 2008


On Wed, 2008-10-01 at 15:37 -0500, Mike Christie wrote:
> Mike Christie wrote:
> > Robert Love wrote:
> >> The following series implements a change to the rport and lport state
> >> machines and applies a locking strategy to those state machines.
> >>
> >> 1) A callback function is added to the LP for RP events (READY, 
> >> FAILED, LOGO)
> >> 2) GNN_ID and GPN_ID are removed, rport state machine starts at PLOGI
> >>    - uses a dummy rport until PLOGI response
> >> 3) A READY state is added to the RP state machine
> >> 4) Locking in the RP and LP state machines now follows a defined strategy
> >> 5) RP only has one error/retry handling routine
> >> 6) LP only has one error/retry handling routine
> >> 7) Naming conventions are changed
> >>     rport (remote port) = fc_rport pointer
> >>     rdata (remote port private) = fc_rport_libfc_priv pointer
> >>     lport (local port) = fc_lport pointer
> >> 8) All interaction with scsi_transport_fc happens in fc_rport.c. Users of
> >>    the RP state machine only create dummy rports and call rport_login();
> >>    the RP state machine handles adding to and deleting from the
> >>    transport layer correctly.
> >>
> >> There are some issues that I've found while working on this patch that
> >> will need to be fixed.
> >>
> >> 1) probably not cancelling work threads appropriately
> >> 2) What happens if discovery fails? Currently it's silent.
> >>    Should discovery be pulled into the lport state machine?
> >> 3) reset is broken
> >> 4) There needs to be a little more commenting about the locking
> >> 5) Some of the #define variables need to be named better. In particular
> >>    I don't like LPORT_RPORT_EV_* or the function lport_rport_event().
> >>
> > 
> > 6) I think my mutex idea was a dud, or maybe it needs two more changes. 
> > The mutex is a problem currently because fc_exch.c can call back into 
> > fc_rport.c or fc_lport.c by calling the ep->resp function. This is 
> > causing problems when fc_exch_timeout calls the resp handler from the 
> > timer context. We talked about this before and we can handle it by 
> > queueing the timeout in delayed work.
> > 
We just had a long talk about all of this and we think it's doable.
Vasu is going to do a little investigation, but we don't foresee any
issues. Since the response handlers are usually called from process
context (the fcoe recv thread), they won't have to change if the EM
timeout is converted from irq to process context.
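
Something like this is what I have in mind, as a rough sketch (the
timeout_work member and handler names below are placeholders, not
actual libfc code):

    #include <linux/workqueue.h>
    #include <linux/err.h>

    /* Replace the exchange timer with delayed work so that ep->resp
     * runs in process context and may take the rp/lp mutexes. */
    struct fc_exch {
            /* ... existing members ... */
            struct delayed_work timeout_work;   /* was: struct timer_list */
    };

    static void fc_exch_timeout_work(struct work_struct *work)
    {
            struct fc_exch *ep = container_of(work, struct fc_exch,
                                              timeout_work.work);

            /* Process context now, so the handler can sleep.  The
             * sequence pointer is elided in this sketch. */
            ep->resp(NULL, ERR_PTR(-FC_EX_TIMEOUT), ep->arg);
    }

    /* When arming the timeout, instead of mod_timer(): */
    INIT_DELAYED_WORK(&ep->timeout_work, fc_exch_timeout_work);
    schedule_delayed_work(&ep->timeout_work, timeout_jiffies);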

> > A new problem is due to the terminate_rport_io callback. If 
> > fc_rport.c:fc_rport_terminate_io is run from scsi_transport_fc.c's host 
> > work queue (work is queued by calling fc_queue_work), it calls
> > exch_mgr_reset, which invokes an ep->resp function that calls back into 
> > fc_rport.c and may then try to grab the rp_mutex. If another thread is 
> > calling fc_remote_port_add on the same rport, that thread will be 
> > holding the rp_mutex, so when fc_remote_port_add calls fc_flush_work 
> > we will lock up: the flush call will be waiting on the workqueue that 
> > is running fc_rport_terminate_io, which is waiting for the mutex we 
> > are holding.
> > 
This is definitely a problem. :)
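
To spell out the cycle (abbreviated call chains, just to illustrate
the dependency):

    Thread A (fc class host workqueue):
        fc_rport_terminate_io()
          -> exch_mgr_reset()
            -> ep->resp()                       /* back into fc_rport.c */
              -> mutex_lock(&rdata->rp_mutex)   /* blocks: B holds it */

    Thread B:
        mutex_lock(&rdata->rp_mutex)            /* acquired */
        fc_remote_port_add()
          -> fc_flush_work()                    /* waits for A's workqueue
                                                   to drain: deadlock */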

> Do we need to hold the mutex to sync up additions and deletions? This is 

Yes, this was the original motivation, but I don't think the code
satisfies this concern as it is.

The comment, "This routine assumes no locks are held on entry.", in the
comment blocks above fc_remote_port_add/delete concerns me. I'm not sure
if it means the Scsi_Host lock, spin locks, or any locking (mutexes
included). I'm sure we can't hold the Scsi_Host lock, because add/delete
is going to try to acquire it, but I'm not sure about the other
synchronization types. That's what scared me away from holding the mutex
when calling add/delete. Right now the code drops the rport mutex and
then calls add/delete, which creates a race.
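
Roughly the shape of the racy code today (illustrative, not verbatim):

    mutex_unlock(&rdata->rp_mutex);
    /* race window: another thread can change the rport's state or
     * call fc_remote_port_delete() on this rport right here */
    fc_remote_port_add(lport->host, 0, &ids);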

> one of the reasons right? For this second problem I think we can just 
> take a page out of the other fc LLDs and what I stole from them for 
> iscsi. It looks like what other LLDs do (at least qla2xxx and maybe 
> lpfc) is that they have a single thread which will handle these events 
> so all the addition and deletion calls to the fc class are synchronized 
> through it. For iscsi we had less baggage and have a similar problem, so 
> I just made a common event thread in the transport class to handle this 
> for all iscsi drivers so each LLD did not have to duplicate the threading.
> 
> Does this help or is it way off? I just started going over the changes 

This helps! If I queue up the adds/deletes then they become serialized
and I don't have to worry about the race where I'm adding and deleting
at the same time. I can also flush the work queue in reset, which might
help a little.

Also, one thing I really dislike about the current code is that I'm
doing the add/delete in the fc_rport_unlock function. (I'm also sending
events there; I'll get to that in a moment.) If I'm scheduling the
add/deletes, then I can do that in _enter_ready()/_enter_logo() with the
mutex held and get that logic out of fc_rport_unlock().
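
For example, something like this (a sketch only; rport_event_queue,
event_work, and the fc_rport_state_enter/RPORT_EV_CREATED names are
just how I'm imagining it, not settled code):

    static void fc_rport_enter_ready(struct fc_rport *rport)
    {
            struct fc_rport_libfc_priv *rdata = rport->dd_data;

            /* caller holds rp_mutex */
            fc_rport_state_enter(rport, RPORT_ST_READY);

            /* Defer the scsi_transport_fc add to a single workqueue so
             * adds/deletes are serialized and never run under rp_mutex. */
            rdata->event = RPORT_EV_CREATED;
            queue_work(rport_event_queue, &rdata->event_work);
    }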

I'd like to get the event sending out of fc_rport_unlock() too, but let
me tackle these existing issues first.

I've also got some local patches to fix up reset that I'm working on now
and will push out soon. I'm adding a new member to the lport: a list of
the rports the lport has logged into. I couldn't use the fc_host list of
rports like I originally wanted because of Scsi_Host locking issues when
traversing the list and calling _delete() in the reset case.
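
In sketch form (the member and function names here are my working
assumptions, not settled code):

    struct fc_lport {
            /* ... existing members ... */
            struct list_head rports;   /* rports this lport is logged into */
    };

    /* on successful login, with the lp_mutex held: */
    list_add_tail(&rdata->peers, &lport->rports);

    /* reset walks our own list instead of the fc_host list, so we
     * never hold the Scsi_Host lock while calling _delete(): */
    struct fc_rport_libfc_priv *rdata, *next;

    list_for_each_entry_safe(rdata, next, &lport->rports, peers)
            fc_rport_logout(rdata->rport);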

> with Chris's changes too, so I am not sure.

