[Open-FCoE] [PATCH 4/9] libfc: make fc_rport_priv the primary rport interface.

Joe Eykholt jeykholt at cisco.com
Mon Jul 6 20:44:36 UTC 2009


Joe Eykholt wrote:
> Robert Love wrote:
>> On Mon, 2009-06-29 at 18:13 -0700, Joe Eykholt wrote:
>>> The rport and discovery modules deal with remote ports
>>> before fc_remote_port_add() can be done, because the
>>> full set of rport identifiers is not known at early stages.
>>>
>>> In preparation for splitting the fc_rport/fc_rport_priv allocation,
>>> make fc_rport_priv the primary interface for the remote port and
>>> discovery engines.
>>>
>>> The FCP / SCSI layers still deal with fc_rport and
>>> fc_rport_libfc_priv, however.
>>>
>>> Signed-off-by: Joe Eykholt <jeykholt at cisco.com>
>>> ---
>> Hi Joe. I'm getting regular panics with this patch applied. I haven't
>> been able to narrow down the reproduction steps yet, but here are a few
>> different dumps that I've gotten. 
> 
> I can reproduce it also.  I must not have tested this individual patch,
> but after the whole series is applied, I don't think you'll see that issue.
> I know this isn't good practice.
> 
> I'll diagnose it and either resend this patch with a fix or resend
> the whole patchset.

The bug was pretty easy to find.  In fc_els_fill(), the ELS_LOGO case was
checking rport for NULL, but since rport is now computed from rdata, it
should check rdata for NULL instead.  That bug disappears in the next patch.

I'll resubmit the patchset after rebasing.

> 	Joe
> 
>> FYI- This is our fcoe-next tree. I've rebased it to Linus' 2.6.31-rc2
>> kernel, removed Yi's DDP patch and merged your patch into the EM rework.
>> I'm now popping your patches on and testing. I rolled back to the
>> previous patch and don't have any problems.
>>
>> I'll push the reworked tree (without your patches) and then try to
>> identify the problem.
>>
>>
>> Cleaned up a bit:
>>
>> BUG: unable to handle kernel paging request at fffffffffffffc20
>> fc_elsct_send+0x22c/0x56f [libfc]
>>
>>  __alloc_skb+0x66/0x15c
>>  fc_lport_logo_resp+0x0/0x171 [libfc]
>>  fc_lport_enter_logo+0xae/0xc7 [libfc]
>>  fc_fabric_logoff+0x50/0x6b [libfc]
>>  fcoe_if_destroy+0x64/0x1eb [fcoe]
>>  fcoe_exit+0x28/0x83 [fcoe]
>>  sys_delete_module+0x1d3/0x249
>>  audit_syscall_entry+0x1b8/0x1e4
>>  system_call_fastpath+0x16/0x1b
>>
>> ---
>>
>> A different panic in more detail.
>>
>>
>>
>> Jul  6 09:34:35 localhost kernel: [  869.905986]  sdc: unknown partition
>> table
>> Jul  6 09:34:35 localhost kernel: [  869.906881]  unknown partition
>> table
>> Jul  6 09:34:35 localhost kernel: [  869.909844] sd 34:0:0:0: [sdb]
>> Attached SCSI disk
>> Jul  6 09:34:35 localhost kernel: [  869.911727] sd 34:0:0:1: [sdc]
>> Attached SCSI disk
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.796475] Oops: 0000 [#1] SMP
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.796685] last sysfs
>> file: /sys/module/fcoe/parameters/destroy
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.801123] Stack:
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.802592] Call Trace:
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.805793] Code: fe ff ff 00 bb 01 00 00 00 e9 a9 02 00 00
>> 31 c0 41 83 7a 68 1b 76 0\
>> b 49 8b 82 d8 00 00 00 48 83 c0 18 c7 00 00 00 00 00 c6 00 0e <44> 8b a6
>> 20 fc ff ff 41 bd 22 00\
>>  00 00 eb ca 45 31 c0 49 8b 52
>>
>> Message from syslogd at localhost at Jul  6 09:34:38 ...
>>  kernel:[  872.807035] CR2: fffffffffffffc20
>> Jul  6 09:34:38 localhost kernel: [  872.780062] kjournald starting.
>> Commit interval 5 seconds
>> Jul  6 09:34:38 localhost kernel: [  872.780075] EXT3-fs warning:
>> maximal mount count reached, r\
>> unning e2fsck is recommended
>> Jul  6 09:34:38 localhost kernel: [  872.786787] EXT3 FS on sdb,
>> internal journal
>> Jul  6 09:34:38 localhost kernel: [  872.787018] EXT3-fs: mounted
>> filesystem with writeback data\
>>  mode.
>> Jul  6 09:34:38 localhost kernel: [  872.795819] BUG: unable to handle
>> kernel paging request at \
>> fffffffffffffc20
>> Jul  6 09:34:38 localhost kernel: [  872.796042] IP:
>> [<ffffffffa03e7b39>] fc_elsct_send+0x22c/0x\
>> 56f [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.796261] PGD 1003067 PUD 1004067
>> PMD 0
>> Jul  6 09:34:38 localhost kernel: [  872.796475] Oops: 0000 [#1] SMP
>> Jul  6 09:34:38 localhost kernel: [  872.796685] last sysfs
>> file: /sys/module/fcoe/parameters/de\
>> stroy
>> Jul  6 09:34:38 localhost kernel: [  872.796895] CPU 5
>> Jul  6 09:34:38 localhost kernel: [  872.797100] Modules linked in: fcoe
>> libfcoe libfc scsi_tran\
>> sport_fc netconsole ixgbe mdio [last unloaded: scsi_transport_fc]
>> Jul  6 09:34:38 localhost kernel: [  872.797542] Pid: 22440, comm:
>> fcoeadm Not tainted 2.6.31-rc\
>> 2 #1 X8DT3
>> Jul  6 09:34:38 localhost kernel: [  872.797752] RIP:
>> 0010:[<ffffffffa03e7b39>]  [<ffffffffa03e7\
>> b39>] fc_elsct_send+0x22c/0x56f [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.798178] RSP:
>> 0018:ffff88032789bd48  EFLAGS: 00010286
>> Jul  6 09:34:38 localhost kernel: [  872.798387] RAX: 109e3c211b000020
>> RBX: ffff88032ee7d5c8 RCX\
>> : 0000000000000000
>> Jul  6 09:34:38 localhost kernel: [  872.798600] RDX: fffffffffffffc00
>> RSI: 0000000000000000 RDI\
>> : ffff88033e1b6868
>> Jul  6 09:34:38 localhost kernel: [  872.798811] RBP: ffff88032789bdc8
>> R08: ffff88033e1b6858 R09\
>> : ffff88032ee7d5c8
>> Jul  6 09:34:38 localhost kernel: [  872.799024] R10: ffff88032f44b400
>> R11: 0000000000000001 R12\
>> : ffff88032f44b400
>> Jul  6 09:34:38 localhost kernel: [  872.799236] R13: ffff88033f96f050
>> R14: 00000000ffffffed R15\
>> : ffff88032ee7d5c8
>> Jul  6 09:34:38 localhost kernel: [  872.799450] FS:
>> 00007f321850e6f0(0000) GS:ffff880028106000\
>> (0000) knlGS:0000000000000000
>> Jul  6 09:34:38 localhost kernel: [  872.799861] CS:  0010 DS: 0000 ES:
>> 0000 CR0: 00000000800500\
>> 3b
>> Jul  6 09:34:38 localhost kernel: [  872.800075] CR2: fffffffffffffc20
>> CR3: 0000000327869000 CR4\
>> : 00000000000006a0
>> Jul  6 09:34:38 localhost kernel: [  872.800289] DR0: 0000000000000000
>> DR1: 0000000000000000 DR2\
>> : 0000000000000000
>> Jul  6 09:34:38 localhost kernel: [  872.800501] DR3: 0000000000000000
>> DR6: 00000000ffff0ff0 DR7\
>> : 0000000000000400
>> Jul  6 09:34:38 localhost kernel: [  872.800714] Process fcoeadm (pid:
>> 22440, threadinfo ffff880\
>> 32789a000, task ffff880322033030)
>> Jul  6 09:34:38 localhost kernel: [  872.801123] Stack:
>> Jul  6 09:34:38 localhost kernel: [  872.801325]  0000000000000020
>> 0000000000000000 ffff88032789\
>> bd98 ffffffff813f1a96
>> Jul  6 09:34:38 localhost kernel: [  872.801546] <0> ffff88032789bd78
>> 0000000000000000 000000000\
>> 0000028 ffff88033f96f050
>> Jul  6 09:34:38 localhost kernel: [  872.801968] <0> 00000000ffffffed
>> ffff88032ee7d5c8 ffffffffa\
>> 03ea647 ffff88032ee7d5c8
>> Jul  6 09:34:38 localhost kernel: [  872.802592] Call Trace:
>> Jul  6 09:34:38 localhost kernel: [  872.802800]  [<ffffffff813f1a96>] ?
>> __alloc_skb+0x66/0x15c
>> Jul  6 09:34:38 localhost kernel: [  872.803015]  [<ffffffffa03ea647>] ?
>> fc_lport_logo_resp+0x0/\
>> 0x171 [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.803229]  [<ffffffffa03ea58e>]
>> fc_lport_enter_logo+0xae/\
>> 0xc7 [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.803445]  [<ffffffffa03ea907>]
>> fc_fabric_logoff+0x50/0x6\
>> b [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.803659]  [<ffffffffa0403dfc>]
>> fcoe_if_destroy+0x64/0x1e\
>> b [fcoe]
>> Jul  6 09:34:38 localhost kernel: [  872.803877]  [<ffffffff8151ea01>] ?
>> _read_unlock+0x26/0x2b
>> Jul  6 09:34:38 localhost kernel: [  872.804088]  [<ffffffffa04021b7>] ?
>> fcoe_hostlist_lookup+0x\
>> 16/0x58 [fcoe]
>> Jul  6 09:34:38 localhost kernel: [  872.804306]  [<ffffffffa0403fb8>]
>> fcoe_destroy+0x35/0x75 [f\
>> coe]
>> Jul  6 09:34:38 localhost kernel: [  872.804520]  [<ffffffff81050554>]
>> param_attr_store+0x25/0x3\
>> 5
>> Jul  6 09:34:38 localhost kernel: [  872.804729]  [<ffffffff810505a9>]
>> module_attr_store+0x21/0x\
>> 25
>> Jul  6 09:34:38 localhost kernel: [  872.804945]  [<ffffffff811221ff>]
>> sysfs_write_file+0xe4/0x1\
>> 19
>> Jul  6 09:34:38 localhost kernel: [  872.805158]  [<ffffffff810ce8ea>]
>> vfs_write+0xae/0x16a
>> Jul  6 09:34:38 localhost kernel: [  872.805369]  [<ffffffff810cea6a>]
>> sys_write+0x47/0x6e
>> Jul  6 09:34:38 localhost kernel: [  872.805582]  [<ffffffff8100ba6b>]
>> system_call_fastpath+0x16\
>> /0x1b
>> Jul  6 09:34:38 localhost kernel: [  872.805793] Code: fe ff ff 00 bb 01
>> 00 00 00 e9 a9 02 00 00\
>>  31 c0 41 83 7a 68 1b 76 0b 49 8b 82 d8 00 00 00 48 83 c0 18 c7 00 00 00
>> 00 00 c6 00 0e <44> 8b \
>> a6 20 fc ff ff 41 bd 22 00 00 00 eb ca 45 31 c0 49 8b 52
>> Jul  6 09:34:38 localhost kernel: [  872.806612] RIP
>> [<ffffffffa03e7b39>] fc_elsct_send+0x22c/0\
>> x56f [libfc]
>> Jul  6 09:34:38 localhost kernel: [  872.806830]  RSP <ffff88032789bd48>
>> Jul  6 09:34:38 localhost kernel: [  872.807035] CR2: fffffffffffffc20
>> Jul  6 09:34:38 localhost kernel: [  872.807470] ---[ end trace
>> 25981f45fca25d77 ]---
>> Jul  6 09:35:38 localhost kernel: [  932.668192]  rport-34:0-1: blocked
>> FC remote port time out:\
>>  removing target and saving binding
>> Jul  6 09:35:38 localhost kernel: [  932.668662]  rport-34:0-0: blocked
>> FC remote port time out:\
>>  removing rport
>> Jul  6 09:40:40 localhost ntpd[2796]: time reset +0.394275 s
>>
>> ---
>>
>> and here are some warnings that just popped up.
>>
>> Jul  6 10:54:52 localhost kernel: [  262.811591]
>> =======================================================
>> Jul  6 10:54:52 localhost kernel: [  262.812072] [ INFO: possible
>> circular locking dependency detected ]
>> Jul  6 10:54:52 localhost kernel: [  262.812313] 2.6.31-rc2 #1
>> Jul  6 10:54:52 localhost kernel: [  262.812547]
>> -------------------------------------------------------
>> Jul  6 10:54:52 localhost kernel: [  262.812787] events/7/26 is trying
>> to acquire lock:
>> Jul  6 10:54:52 localhost kernel: [  262.813026]
>> (&mp->em_lock){+.-...}, at: [<ffffffffa004c41b>] fc_exch_alloc
>> +0xa1/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.813571] 
>> Jul  6 10:54:52 localhost kernel: [  262.813572] but task is already
>> holding lock:
>> Jul  6 10:54:52 localhost kernel: [  262.814040]
>> (&ep->ex_lock){+.-...}, at: [<ffffffffa004d687>] fc_exch_timeout
>> +0x2c/0x2a1 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.814583] 
>> Jul  6 10:54:52 localhost kernel: [  262.814584] which lock already
>> depends on the new lock.
>> Jul  6 10:54:52 localhost kernel: [  262.814584] 
>> Jul  6 10:54:52 localhost kernel: [  262.815280] 
>> Jul  6 10:54:52 localhost kernel: [  262.815280] the existing dependency
>> chain (in reverse order) is:
>> Jul  6 10:54:52 localhost kernel: [  262.815748] 
>> Jul  6 10:54:52 localhost kernel: [  262.815749] -> #1
>> (&ep->ex_lock){+.-...}:
>> Jul  6 10:54:52 localhost kernel: [  262.816310]
>> [<ffffffff810639c6>] __lock_acquire+0x132f/0x168f
>> Jul  6 10:54:52 localhost kernel: [  262.816587]
>> [<ffffffff81063de7>] lock_acquire+0xc1/0xe5
>> Jul  6 10:54:52 localhost kernel: [  262.816858]
>> [<ffffffff8151ec01>] _spin_lock_bh+0x31/0x3d
>> Jul  6 10:54:52 localhost kernel: [  262.817133]
>> [<ffffffffa004c496>] fc_exch_alloc+0x11c/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.817411]
>> [<ffffffffa004d448>] fc_exch_seq_send+0x2a/0x23d [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.817690]
>> [<ffffffffa004de61>] fc_elsct_send+0x548/0x55b [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.817965]
>> [<ffffffffa004f3a7>] fc_lport_enter_flogi+0xae/0xc5 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.818438]
>> [<ffffffffa004f6a8>] fc_linkup+0x5b/0x68 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.818713]
>> [<ffffffffa0062a0e>] fcoe_ctlr_link_up+0x8b/0xa4 [libfcoe]
>> Jul  6 10:54:52 localhost kernel: [  262.818988]
>> [<ffffffffa0069d1c>] fcoe_create+0x7b8/0x834 [fcoe]
>> Jul  6 10:54:52 localhost kernel: [  262.819263]
>> [<ffffffff81050554>] param_attr_store+0x25/0x35
>> Jul  6 10:54:52 localhost kernel: [  262.819539]
>> [<ffffffff810505a9>] module_attr_store+0x21/0x25
>> Jul  6 10:54:52 localhost kernel: [  262.819813]
>> [<ffffffff811221ff>] sysfs_write_file+0xe4/0x119
>> Jul  6 10:54:52 localhost kernel: [  262.820088]
>> [<ffffffff810ce8ea>] vfs_write+0xae/0x16a
>> Jul  6 10:54:52 localhost kernel: [  262.820361]
>> [<ffffffff810cea6a>] sys_write+0x47/0x6e
>> Jul  6 10:54:52 localhost kernel: [  262.820631]
>> [<ffffffff8100ba6b>] system_call_fastpath+0x16/0x1b
>> Jul  6 10:54:52 localhost kernel: [  262.820907]
>> [<ffffffffffffffff>] 0xffffffffffffffff
>> Jul  6 10:54:52 localhost kernel: [  262.821183] 
>> Jul  6 10:54:52 localhost kernel: [  262.821184] -> #0
>> (&mp->em_lock){+.-...}:
>> Jul  6 10:54:52 localhost kernel: [  262.821750]
>> [<ffffffff810636fb>] __lock_acquire+0x1064/0x168f
>> Jul  6 10:54:52 localhost kernel: [  262.826900]
>> [<ffffffff81063de7>] lock_acquire+0xc1/0xe5
>> Jul  6 10:54:52 localhost kernel: [  262.827171]
>> [<ffffffff8151ec01>] _spin_lock_bh+0x31/0x3d
>> Jul  6 10:54:52 localhost kernel: [  262.827444]
>> [<ffffffffa004c41b>] fc_exch_alloc+0xa1/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.827721]
>> [<ffffffffa004d448>] fc_exch_seq_send+0x2a/0x23d [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.827997]
>> [<ffffffffa004d7eb>] fc_exch_timeout+0x190/0x2a1 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.828275]
>> [<ffffffff8104dc80>] worker_thread+0x1fa/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.828548]
>> [<ffffffff8105217a>] kthread+0x88/0x90
>> Jul  6 10:54:52 localhost kernel: [  262.828819]
>> [<ffffffff8100cb5a>] child_rip+0xa/0x20
>> Jul  6 10:54:52 localhost kernel: [  262.829091]
>> [<ffffffffffffffff>] 0xffffffffffffffff
>> Jul  6 10:54:52 localhost kernel: [  262.829363] 
>> Jul  6 10:54:52 localhost kernel: [  262.829364] other info that might
>> help us debug this:
>> Jul  6 10:54:52 localhost kernel: [  262.829364] 
>> Jul  6 10:54:52 localhost kernel: [  262.830062] 3 locks held by
>> events/7/26:
>> Jul  6 10:54:52 localhost kernel: [  262.830297]  #0:  (events){+.+.+.},
>> at: [<ffffffff8104dc29>] worker_thread+0x1a3/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.830864]  #1:
>> (&(&ep->timeout_work)->work){+.+...}, at: [<ffffffff8104dc29>]
>> worker_thread+0x1a3/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.831434]  #2:
>> (&ep->ex_lock){+.-...}, at: [<ffffffffa004d687>] fc_exch_timeout
>> +0x2c/0x2a1 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.832008] 
>> Jul  6 10:54:52 localhost kernel: [  262.832008] stack backtrace:
>> Jul  6 10:54:52 localhost kernel: [  262.832474] Pid: 26, comm: events/7
>> Not tainted 2.6.31-rc2 #1
>> Jul  6 10:54:52 localhost kernel: [  262.832713] Call Trace:
>> Jul  6 10:54:52 localhost kernel: [  262.832948]  [<ffffffff810621fc>]
>> print_circular_bug_tail+0xc1/0xcc
>> Jul  6 10:54:52 localhost kernel: [  262.833189]  [<ffffffff810636fb>]
>> __lock_acquire+0x1064/0x168f
>> Jul  6 10:54:52 localhost kernel: [  262.833430]  [<ffffffff810a1a11>] ?
>> mempool_alloc_slab+0x11/0x13
>> Jul  6 10:54:52 localhost kernel: [  262.833670]  [<ffffffff810619e5>] ?
>> trace_hardirqs_on_caller+0xf9/0x13e
>> Jul  6 10:54:52 localhost kernel: [  262.833910]  [<ffffffff81063de7>]
>> lock_acquire+0xc1/0xe5
>> Jul  6 10:54:52 localhost kernel: [  262.834153]  [<ffffffffa004c41b>] ?
>> fc_exch_alloc+0xa1/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.834397]  [<ffffffffa004bfde>] ?
>> fc_exch_rrq_resp+0x0/0xfd [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.834639]  [<ffffffff8151ec01>]
>> _spin_lock_bh+0x31/0x3d
>> Jul  6 10:54:52 localhost kernel: [  262.834880]  [<ffffffffa004c41b>] ?
>> fc_exch_alloc+0xa1/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.835123]  [<ffffffffa004c41b>]
>> fc_exch_alloc+0xa1/0x26e [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.835366]  [<ffffffff813f2696>] ?
>> dev_alloc_skb+0x16/0x2c
>> Jul  6 10:54:52 localhost kernel: [  262.835607]  [<ffffffffa004bfde>] ?
>> fc_exch_rrq_resp+0x0/0xfd [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.835850]  [<ffffffffa004d448>]
>> fc_exch_seq_send+0x2a/0x23d [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.836094]  [<ffffffffa004d7eb>]
>> fc_exch_timeout+0x190/0x2a1 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.836336]  [<ffffffff8104dc80>]
>> worker_thread+0x1fa/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.836577]  [<ffffffff8104dc29>] ?
>> worker_thread+0x1a3/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.836821]  [<ffffffffa004d65b>] ?
>> fc_exch_timeout+0x0/0x2a1 [libfc]
>> Jul  6 10:54:52 localhost kernel: [  262.837064]  [<ffffffff810524a7>] ?
>> autoremove_wake_function+0x0/0x38
>> Jul  6 10:54:52 localhost kernel: [  262.837306]  [<ffffffff81061a37>] ?
>> trace_hardirqs_on+0xd/0xf
>> Jul  6 10:54:52 localhost kernel: [  262.837547]  [<ffffffff8104da86>] ?
>> worker_thread+0x0/0x30a
>> Jul  6 10:54:52 localhost kernel: [  262.837787]  [<ffffffff8105217a>]
>> kthread+0x88/0x90
>> Jul  6 10:54:52 localhost kernel: [  262.838026]  [<ffffffff8100cb5a>]
>> child_rip+0xa/0x20
>> Jul  6 10:54:52 localhost kernel: [  262.838265]  [<ffffffff81036484>] ?
>> finish_task_switch+0x3b/0xe3
>> Jul  6 10:54:52 localhost kernel: [  262.838504]  [<ffffffff8100c53c>] ?
>> restore_args+0x0/0x30
>> Jul  6 10:54:52 localhost kernel: [  262.838744]  [<ffffffff810520f2>] ?
>> kthread+0x0/0x90
>> Jul  6 10:54:52 localhost kernel: [  262.838983]  [<ffffffff8100cb50>] ?
>> child_rip+0x0/0x20
>> Jul  6 10:55:32 localhost kernel: [  302.715854] sd 6:0:0:0: [sdb]
>> Unhandled error code
>> Jul  6 10:55:32 localhost kernel: [  302.716103] sd 6:0:0:0: [sdb]
>> Result: hostbyte=DID_BUS_BUSY driverbyte=DRIVER_OK
>> Jul  6 10:55:32 localhost kernel: [  302.716583] end_request: I/O error,
>> dev sdb, sector 0
>> Jul  6 10:55:32 localhost kernel: [  302.716822] Buffer I/O error on
>> device sdb, logical block 0
>> Jul  6 10:55:32 localhost kernel: [  302.717061] lost page write due to
>> I/O error on sdb
>> Jul  6 10:55:32 localhost kernel: [  302.717319] EXT3 FS on sdb,
>> internal journal
>> Jul  6 10:55:32 localhost kernel: [  302.717593] EXT3-fs: mounted
>> filesystem with writeback data mode.
>> Jul  6 10:55:32 localhost kernel: [  302.722565] ------------[ cut
>> here ]------------
>> Jul  6 10:55:32 localhost kernel: [  302.722810] WARNING: at
>> fs/buffer.c:1152 mark_buffer_dirty+0x2b/0x86()
>> Jul  6 10:55:32 localhost kernel: [  302.723053] Hardware name: X8DT3
>> Jul  6 10:55:32 localhost kernel: [  302.723292] Modules linked in: fcoe
>> libfcoe libfc scsi_transport_fc netconsole ixgbe mdio [last unloaded:
>> scsi_wait_scan]
>> Jul  6 10:55:32 localhost kernel: [  302.724031] Pid: 20453, comm:
>> umount Not tainted 2.6.31-rc2 #1
>> Jul  6 10:55:32 localhost kernel: [  302.724270] Call Trace:
>> Jul  6 10:55:32 localhost kernel: [  302.724507]  [<ffffffff810eed02>] ?
>> mark_buffer_dirty+0x2b/0x86
>> Jul  6 10:55:32 localhost kernel: [  302.724750]  [<ffffffff8103ca6a>]
>> warn_slowpath_common+0x77/0xa4
>> Jul  6 10:55:32 localhost kernel: [  302.724991]  [<ffffffff8103caa6>]
>> warn_slowpath_null+0xf/0x11
>> Jul  6 10:55:32 localhost kernel: [  302.725230]  [<ffffffff810eed02>]
>> mark_buffer_dirty+0x2b/0x86
>> Jul  6 10:55:32 localhost kernel: [  302.725471]  [<ffffffff8113292e>]
>> ext3_put_super+0x88/0x220
>> Jul  6 10:55:32 localhost kernel: [  302.725711]  [<ffffffff810d0385>]
>> generic_shutdown_super+0x58/0xd7
>> Jul  6 10:55:32 localhost kernel: [  302.725952]  [<ffffffff810d0426>]
>> kill_block_super+0x22/0x3a
>> Jul  6 10:55:32 localhost kernel: [  302.726192]  [<ffffffff810d0b3b>]
>> deactivate_super+0x68/0x7d
>> Jul  6 10:55:32 localhost kernel: [  302.726435]  [<ffffffff810e4923>]
>> mntput_no_expire+0xbb/0xf8
>> Jul  6 10:55:32 localhost kernel: [  302.726675]  [<ffffffff810e4ee4>]
>> sys_umount+0x2c3/0x2f2
>> Jul  6 10:55:32 localhost kernel: [  302.726916]  [<ffffffff8100ba6b>]
>> system_call_fastpath+0x16/0x1b
>> Jul  6 10:55:32 localhost kernel: [  302.727156] ---[ end trace
>> f6a7898dc6bf90a9 ]---
>>
>>
>>
>>
>>
>>
>>
>>
>>>  drivers/scsi/libfc/fc_disc.c  |   95 ++++-------
>>>  drivers/scsi/libfc/fc_elsct.c |    4 
>>>  drivers/scsi/libfc/fc_fcp.c   |    2 
>>>  drivers/scsi/libfc/fc_lport.c |   26 +--
>>>  drivers/scsi/libfc/fc_rport.c |  364 ++++++++++++++++++++---------------------
>>>  include/scsi/fc_encode.h      |    5 -
>>>  include/scsi/libfc.h          |   26 ++-
>>>  7 files changed, 244 insertions(+), 278 deletions(-)
>>>
>>>
>>> diff --git a/drivers/scsi/libfc/fc_disc.c b/drivers/scsi/libfc/fc_disc.c
>>> index ecc625c..448ffc3 100644
>>> --- a/drivers/scsi/libfc/fc_disc.c
>>> +++ b/drivers/scsi/libfc/fc_disc.c
>>> @@ -49,7 +49,6 @@ static void fc_disc_gpn_ft_req(struct fc_disc *);
>>>  static void fc_disc_gpn_ft_resp(struct fc_seq *, struct fc_frame *, void *);
>>>  static int fc_disc_new_target(struct fc_disc *, struct fc_rport *,
>>>  			      struct fc_rport_identifiers *);
>>> -static void fc_disc_del_target(struct fc_disc *, struct fc_rport *);
>>>  static void fc_disc_done(struct fc_disc *);
>>>  static void fc_disc_timeout(struct work_struct *);
>>>  static void fc_disc_single(struct fc_disc *, struct fc_disc_port *);
>>> @@ -60,27 +59,19 @@ static void fc_disc_restart(struct fc_disc *);
>>>   * @lport: Fibre Channel host port instance
>>>   * @port_id: remote port port_id to match
>>>   */
>>> -struct fc_rport *fc_disc_lookup_rport(const struct fc_lport *lport,
>>> -				      u32 port_id)
>>> +struct fc_rport_priv *fc_disc_lookup_rport(const struct fc_lport *lport,
>>> +					   u32 port_id)
>>>  {
>>>  	const struct fc_disc *disc = &lport->disc;
>>> -	struct fc_rport *rport, *found = NULL;
>>> +	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata;
>>> -	int disc_found = 0;
>>>  
>>>  	list_for_each_entry(rdata, &disc->rports, peers) {
>>>  		rport = PRIV_TO_RPORT(rdata);
>>> -		if (rport->port_id == port_id) {
>>> -			disc_found = 1;
>>> -			found = rport;
>>> -			break;
>>> -		}
>>> +		if (rport->port_id == port_id)
>>> +			return rdata;
>>>  	}
>>> -
>>> -	if (!disc_found)
>>> -		found = NULL;
>>> -
>>> -	return found;
>>> +	return NULL;
>>>  }
>>>  
>>>  /**
>>> @@ -93,21 +84,18 @@ struct fc_rport *fc_disc_lookup_rport(const struct fc_lport *lport,
>>>  void fc_disc_stop_rports(struct fc_disc *disc)
>>>  {
>>>  	struct fc_lport *lport;
>>> -	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata, *next;
>>>  
>>>  	lport = disc->lport;
>>>  
>>>  	mutex_lock(&disc->disc_mutex);
>>>  	list_for_each_entry_safe(rdata, next, &disc->rports, peers) {
>>> -		rport = PRIV_TO_RPORT(rdata);
>>>  		list_del(&rdata->peers);
>>> -		lport->tt.rport_logoff(rport);
>>> +		lport->tt.rport_logoff(rdata);
>>>  	}
>>>  
>>>  	list_for_each_entry_safe(rdata, next, &disc->rogue_rports, peers) {
>>> -		rport = PRIV_TO_RPORT(rdata);
>>> -		lport->tt.rport_logoff(rport);
>>> +		lport->tt.rport_logoff(rdata);
>>>  	}
>>>  
>>>  	mutex_unlock(&disc->disc_mutex);
>>> @@ -116,18 +104,18 @@ void fc_disc_stop_rports(struct fc_disc *disc)
>>>  /**
>>>   * fc_disc_rport_callback() - Event handler for rport events
>>>   * @lport: The lport which is receiving the event
>>> - * @rport: The rport which the event has occured on
>>> + * @rdata: private remote port data
>>>   * @event: The event that occured
>>>   *
>>>   * Locking Note: The rport lock should not be held when calling
>>>   *		 this function.
>>>   */
>>>  static void fc_disc_rport_callback(struct fc_lport *lport,
>>> -				   struct fc_rport *rport,
>>> +				   struct fc_rport_priv *rdata,
>>>  				   enum fc_rport_event event)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_disc *disc = &lport->disc;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  
>>>  	FC_DISC_DBG(disc, "Received a %d event for port (%6x)\n", event,
>>>  		    rport->port_id);
>>> @@ -169,7 +157,6 @@ static void fc_disc_recv_rscn_req(struct fc_seq *sp, struct fc_frame *fp,
>>>  				  struct fc_disc *disc)
>>>  {
>>>  	struct fc_lport *lport;
>>> -	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata;
>>>  	struct fc_els_rscn *rp;
>>>  	struct fc_els_rscn_page *pp;
>>> @@ -249,11 +236,10 @@ static void fc_disc_recv_rscn_req(struct fc_seq *sp, struct fc_frame *fp,
>>>  			    redisc, lport->state, disc->pending);
>>>  		list_for_each_entry_safe(dp, next, &disc_ports, peers) {
>>>  			list_del(&dp->peers);
>>> -			rport = lport->tt.rport_lookup(lport, dp->ids.port_id);
>>> -			if (rport) {
>>> -				rdata = rport->dd_data;
>>> +			rdata = lport->tt.rport_lookup(lport, dp->ids.port_id);
>>> +			if (rdata) {
>>>  				list_del(&rdata->peers);
>>> -				lport->tt.rport_logoff(rport);
>>> +				lport->tt.rport_logoff(rdata);
>>>  			}
>>>  			fc_disc_single(disc, dp);
>>>  		}
>>> @@ -308,16 +294,14 @@ static void fc_disc_recv_req(struct fc_seq *sp, struct fc_frame *fp,
>>>   */
>>>  static void fc_disc_restart(struct fc_disc *disc)
>>>  {
>>> -	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata, *next;
>>>  	struct fc_lport *lport = disc->lport;
>>>  
>>>  	FC_DISC_DBG(disc, "Restarting discovery\n");
>>>  
>>>  	list_for_each_entry_safe(rdata, next, &disc->rports, peers) {
>>> -		rport = PRIV_TO_RPORT(rdata);
>>>  		list_del(&rdata->peers);
>>> -		lport->tt.rport_logoff(rport);
>>> +		lport->tt.rport_logoff(rdata);
>>>  	}
>>>  
>>>  	disc->requested = 1;
>>> @@ -335,6 +319,7 @@ static void fc_disc_start(void (*disc_callback)(struct fc_lport *,
>>>  						enum fc_disc_event),
>>>  			  struct fc_lport *lport)
>>>  {
>>> +	struct fc_rport_priv *rdata;
>>>  	struct fc_rport *rport;
>>>  	struct fc_rport_identifiers ids;
>>>  	struct fc_disc *disc = &lport->disc;
>>> @@ -362,8 +347,9 @@ static void fc_disc_start(void (*disc_callback)(struct fc_lport *,
>>>  	 * Handle point-to-point mode as a simple discovery
>>>  	 * of the remote port. Yucky, yucky, yuck, yuck!
>>>  	 */
>>> -	rport = disc->lport->ptp_rp;
>>> -	if (rport) {
>>> +	rdata = disc->lport->ptp_rp;
>>> +	if (rdata) {
>>> +		rport = PRIV_TO_RPORT(rdata);
>>>  		ids.port_id = rport->port_id;
>>>  		ids.port_name = rport->port_name;
>>>  		ids.node_name = rport->node_name;
>>> @@ -418,7 +404,9 @@ static int fc_disc_new_target(struct fc_disc *disc,
>>>  			 * assigned the same FCID.  This should be rare.
>>>  			 * Delete the old one and fall thru to re-create.
>>>  			 */
>>> -			fc_disc_del_target(disc, rport);
>>> +			rdata = rport->dd_data;
>>> +			list_del(&rdata->peers);
>>> +			lport->tt.rport_logoff(rdata);
>>>  			rport = NULL;
>>>  		}
>>>  	}
>>> @@ -426,38 +414,27 @@ static int fc_disc_new_target(struct fc_disc *disc,
>>>  	    ids->port_id != fc_host_port_id(lport->host) &&
>>>  	    ids->port_name != lport->wwpn) {
>>>  		if (!rport) {
>>> -			rport = lport->tt.rport_lookup(lport, ids->port_id);
>>> +			rdata = lport->tt.rport_lookup(lport, ids->port_id);
>>>  			if (!rport) {
>>> -				rport = lport->tt.rport_create(lport, ids);
>>> +				rdata = lport->tt.rport_create(lport, ids);
>>>  			}
>>> -			if (!rport)
>>> +			if (!rdata)
>>>  				error = -ENOMEM;
>>> +			else
>>> +				rport = PRIV_TO_RPORT(rdata);
>>>  		}
>>>  		if (rport) {
>>>  			rdata = rport->dd_data;
>>>  			rdata->ops = &fc_disc_rport_ops;
>>>  			rdata->rp_state = RPORT_ST_INIT;
>>>  			list_add_tail(&rdata->peers, &disc->rogue_rports);
>>> -			lport->tt.rport_login(rport);
>>> +			lport->tt.rport_login(rdata);
>>>  		}
>>>  	}
>>>  	return error;
>>>  }
>>>  
>>>  /**
>>> - * fc_disc_del_target() - Delete a target
>>> - * @disc: FC discovery context
>>> - * @rport: The remote port to be removed
>>> - */
>>> -static void fc_disc_del_target(struct fc_disc *disc, struct fc_rport *rport)
>>> -{
>>> -	struct fc_lport *lport = disc->lport;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> -	list_del(&rdata->peers);
>>> -	lport->tt.rport_logoff(rport);
>>> -}
>>> -
>>> -/**
>>>   * fc_disc_done() - Discovery has been completed
>>>   * @disc: FC discovery context
>>>   * Locking Note: This function expects that the disc mutex is locked before
>>> @@ -573,7 +550,6 @@ static int fc_disc_gpn_ft_parse(struct fc_disc *disc, void *buf, size_t len)
>>>  	size_t tlen;
>>>  	int error = 0;
>>>  	struct fc_rport_identifiers ids;
>>> -	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata;
>>>  
>>>  	lport = disc->lport;
>>> @@ -622,14 +598,13 @@ static int fc_disc_gpn_ft_parse(struct fc_disc *disc, void *buf, size_t len)
>>>  
>>>  		if (ids.port_id != fc_host_port_id(lport->host) &&
>>>  		    ids.port_name != lport->wwpn) {
>>> -			rport = lport->tt.rport_create(lport, &ids);
>>> -			if (rport) {
>>> -				rdata = rport->dd_data;
>>> +			rdata = lport->tt.rport_create(lport, &ids);
>>> +			if (rdata) {
>>>  				rdata->ops = &fc_disc_rport_ops;
>>>  				rdata->local_port = lport;
>>>  				list_add_tail(&rdata->peers,
>>>  					      &disc->rogue_rports);
>>> -				lport->tt.rport_login(rport);
>>> +				lport->tt.rport_login(rdata);
>>>  			} else
>>>  				printk(KERN_WARNING "libfc: Failed to allocate "
>>>  				       "memory for the newly discovered port "
>>> @@ -766,7 +741,6 @@ static void fc_disc_gpn_ft_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  static void fc_disc_single(struct fc_disc *disc, struct fc_disc_port *dp)
>>>  {
>>>  	struct fc_lport *lport;
>>> -	struct fc_rport *new_rport;
>>>  	struct fc_rport_priv *rdata;
>>>  
>>>  	lport = disc->lport;
>>> @@ -774,13 +748,12 @@ static void fc_disc_single(struct fc_disc *disc, struct fc_disc_port *dp)
>>>  	if (dp->ids.port_id == fc_host_port_id(lport->host))
>>>  		goto out;
>>>  
>>> -	new_rport = lport->tt.rport_create(lport, &dp->ids);
>>> -	if (new_rport) {
>>> -		rdata = new_rport->dd_data;
>>> +	rdata = lport->tt.rport_create(lport, &dp->ids);
>>> +	if (rdata) {
>>>  		rdata->ops = &fc_disc_rport_ops;
>>>  		kfree(dp);
>>>  		list_add_tail(&rdata->peers, &disc->rogue_rports);
>>> -		lport->tt.rport_login(new_rport);
>>> +		lport->tt.rport_login(rdata);
>>>  	}
>>>  	return;
>>>  out:
>>> diff --git a/drivers/scsi/libfc/fc_elsct.c b/drivers/scsi/libfc/fc_elsct.c
>>> index 5878b34..2b8a3bb 100644
>>> --- a/drivers/scsi/libfc/fc_elsct.c
>>> +++ b/drivers/scsi/libfc/fc_elsct.c
>>> @@ -32,7 +32,7 @@
>>>   * fc_elsct_send - sends ELS/CT frame
>>>   */
>>>  static struct fc_seq *fc_elsct_send(struct fc_lport *lport,
>>> -				    struct fc_rport *rport,
>>> +				    struct fc_rport_priv *rdata,
>>>  				    struct fc_frame *fp,
>>>  				    unsigned int op,
>>>  				    void (*resp)(struct fc_seq *,
>>> @@ -47,7 +47,7 @@ static struct fc_seq *fc_elsct_send(struct fc_lport *lport,
>>>  
>>>  	/* ELS requests */
>>>  	if ((op >= ELS_LS_RJT) && (op <= ELS_AUTH_ELS))
>>> -		rc = fc_els_fill(lport, rport, fp, op, &r_ctl, &did, &fh_type);
>>> +		rc = fc_els_fill(lport, rdata, fp, op, &r_ctl, &did, &fh_type);
>>>  	else
>>>  		/* CT requests */
>>>  		rc = fc_ct_fill(lport, fp, op, &r_ctl, &did, &fh_type);
>>> diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
>>> index 60e665a..5957631 100644
>>> --- a/drivers/scsi/libfc/fc_fcp.c
>>> +++ b/drivers/scsi/libfc/fc_fcp.c
>>> @@ -1310,7 +1310,7 @@ static void fc_fcp_rec(struct fc_fcp_pkt *fsp)
>>>  	fc_fill_fc_hdr(fp, FC_RCTL_ELS_REQ, rport->port_id,
>>>  		       fc_host_port_id(rp->local_port->host), FC_TYPE_ELS,
>>>  		       FC_FC_FIRST_SEQ | FC_FC_END_SEQ | FC_FC_SEQ_INIT, 0);
>>> -	if (lp->tt.elsct_send(lp, rport, fp, ELS_REC, fc_fcp_rec_resp,
>>> +	if (lp->tt.elsct_send(lp, rport->dd_data, fp, ELS_REC, fc_fcp_rec_resp,
>>>  			      fsp, jiffies_to_msecs(FC_SCSI_REC_TOV))) {
>>>  		fc_fcp_pkt_hold(fsp);		/* hold while REC outstanding */
>>>  		return;
>>> diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
>>> index bb83c89..fd69093 100644
>>> --- a/drivers/scsi/libfc/fc_lport.c
>>> +++ b/drivers/scsi/libfc/fc_lport.c
>>> @@ -134,16 +134,18 @@ static int fc_frame_drop(struct fc_lport *lport, struct fc_frame *fp)
>>>  /**
>>>   * fc_lport_rport_callback() - Event handler for rport events
>>>   * @lport: The lport which is receiving the event
>>> - * @rport: The rport which the event has occured on
>>> + * @rdata: private remote port data
>>>   * @event: The event that occurred
>>>   *
>>>   * Locking Note: The rport lock should not be held when calling
>>>   *		 this function.
>>>   */
>>>  static void fc_lport_rport_callback(struct fc_lport *lport,
>>> -				    struct fc_rport *rport,
>>> +				    struct fc_rport_priv *rdata,
>>>  				    enum fc_rport_event event)
>>>  {
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>> +
>>>  	FC_LPORT_DBG(lport, "Received a %d event for port (%6x)\n", event,
>>>  		     rport->port_id);
>>>  
>>> @@ -152,7 +154,7 @@ static void fc_lport_rport_callback(struct fc_lport *lport,
>>>  		if (rport->port_id == FC_FID_DIR_SERV) {
>>>  			mutex_lock(&lport->lp_mutex);
>>>  			if (lport->state == LPORT_ST_DNS) {
>>> -				lport->dns_rp = rport;
>>> +				lport->dns_rp = rdata;
>>>  				fc_lport_enter_rpn_id(lport);
>>>  			} else {
>>>  				FC_LPORT_DBG(lport, "Received a CREATED event "
>>> @@ -161,7 +163,7 @@ static void fc_lport_rport_callback(struct fc_lport *lport,
>>>  					     "in the DNS state, it's in the "
>>>  					     "%d state", rport->port_id,
>>>  					     lport->state);
>>> -				lport->tt.rport_logoff(rport);
>>> +				lport->tt.rport_logoff(rdata);
>>>  			}
>>>  			mutex_unlock(&lport->lp_mutex);
>>>  		} else
>>> @@ -833,7 +835,7 @@ static void fc_lport_recv_req(struct fc_lport *lport, struct fc_seq *sp,
>>>  {
>>>  	struct fc_frame_header *fh = fc_frame_header_get(fp);
>>>  	void (*recv) (struct fc_seq *, struct fc_frame *, struct fc_lport *);
>>> -	struct fc_rport *rport;
>>> +	struct fc_rport_priv *rdata;
>>>  	u32 s_id;
>>>  	u32 d_id;
>>>  	struct fc_seq_els_data rjt_data;
>>> @@ -889,9 +891,9 @@ static void fc_lport_recv_req(struct fc_lport *lport, struct fc_seq *sp,
>>>  			s_id = ntoh24(fh->fh_s_id);
>>>  			d_id = ntoh24(fh->fh_d_id);
>>>  
>>> -			rport = lport->tt.rport_lookup(lport, s_id);
>>> -			if (rport)
>>> -				lport->tt.rport_recv_req(sp, fp, rport);
>>> +			rdata = lport->tt.rport_lookup(lport, s_id);
>>> +			if (rdata)
>>> +				lport->tt.rport_recv_req(sp, fp, rdata);
>>>  			else {
>>>  				rjt_data.fp = NULL;
>>>  				rjt_data.reason = ELS_RJT_UNAB;
>>> @@ -1305,7 +1307,6 @@ static struct fc_rport_operations fc_lport_rport_ops = {
>>>   */
>>>  static void fc_lport_enter_dns(struct fc_lport *lport)
>>>  {
>>> -	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata;
>>>  	struct fc_rport_identifiers ids;
>>>  
>>> @@ -1319,13 +1320,12 @@ static void fc_lport_enter_dns(struct fc_lport *lport)
>>>  
>>>  	fc_lport_state_enter(lport, LPORT_ST_DNS);
>>>  
>>> -	rport = lport->tt.rport_create(lport, &ids);
>>> -	if (!rport)
>>> +	rdata = lport->tt.rport_create(lport, &ids);
>>> +	if (!rdata)
>>>  		goto err;
>>>  
>>> -	rdata = rport->dd_data;
>>>  	rdata->ops = &fc_lport_rport_ops;
>>> -	lport->tt.rport_login(rport);
>>> +	lport->tt.rport_login(rdata);
>>>  	return;
>>>  
>>>  err:
>>> diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
>>> index a8c37ab..ed4722b 100644
>>> --- a/drivers/scsi/libfc/fc_rport.c
>>> +++ b/drivers/scsi/libfc/fc_rport.c
>>> @@ -57,23 +57,23 @@
>>>  
>>>  struct workqueue_struct *rport_event_queue;
>>>  
>>> -static void fc_rport_enter_plogi(struct fc_rport *);
>>> -static void fc_rport_enter_prli(struct fc_rport *);
>>> -static void fc_rport_enter_rtv(struct fc_rport *);
>>> -static void fc_rport_enter_ready(struct fc_rport *);
>>> -static void fc_rport_enter_logo(struct fc_rport *);
>>> +static void fc_rport_enter_plogi(struct fc_rport_priv *);
>>> +static void fc_rport_enter_prli(struct fc_rport_priv *);
>>> +static void fc_rport_enter_rtv(struct fc_rport_priv *);
>>> +static void fc_rport_enter_ready(struct fc_rport_priv *);
>>> +static void fc_rport_enter_logo(struct fc_rport_priv *);
>>>  
>>> -static void fc_rport_recv_plogi_req(struct fc_rport *,
>>> +static void fc_rport_recv_plogi_req(struct fc_rport_priv *,
>>>  				    struct fc_seq *, struct fc_frame *);
>>> -static void fc_rport_recv_prli_req(struct fc_rport *,
>>> +static void fc_rport_recv_prli_req(struct fc_rport_priv *,
>>>  				   struct fc_seq *, struct fc_frame *);
>>> -static void fc_rport_recv_prlo_req(struct fc_rport *,
>>> +static void fc_rport_recv_prlo_req(struct fc_rport_priv *,
>>>  				   struct fc_seq *, struct fc_frame *);
>>> -static void fc_rport_recv_logo_req(struct fc_rport *,
>>> +static void fc_rport_recv_logo_req(struct fc_rport_priv *,
>>>  				   struct fc_seq *, struct fc_frame *);
>>>  static void fc_rport_timeout(struct work_struct *);
>>> -static void fc_rport_error(struct fc_rport *, struct fc_frame *);
>>> -static void fc_rport_error_retry(struct fc_rport *, struct fc_frame *);
>>> +static void fc_rport_error(struct fc_rport_priv *, struct fc_frame *);
>>> +static void fc_rport_error_retry(struct fc_rport_priv *, struct fc_frame *);
>>>  static void fc_rport_work(struct work_struct *);
>>>  
>>>  static const char *fc_rport_state_names[] = {
>>> @@ -89,12 +89,14 @@ static const char *fc_rport_state_names[] = {
>>>  static void fc_rport_rogue_destroy(struct device *dev)
>>>  {
>>>  	struct fc_rport *rport = dev_to_rport(dev);
>>> -	FC_RPORT_DBG(rport, "Destroying rogue rport\n");
>>> +	struct fc_rport_priv *rdata = RPORT_TO_PRIV(rport);
>>> +
>>> +	FC_RPORT_DBG(rdata, "Destroying rogue rport\n");
>>>  	kfree(rport);
>>>  }
>>>  
>>> -struct fc_rport *fc_rport_rogue_create(struct fc_lport *lport,
>>> -				       struct fc_rport_identifiers *ids)
>>> +struct fc_rport_priv *fc_rport_rogue_create(struct fc_lport *lport,
>>> +					    struct fc_rport_identifiers *ids)
>>>  {
>>>  	struct fc_rport *rport;
>>>  	struct fc_rport_priv *rdata;
>>> @@ -135,17 +137,16 @@ struct fc_rport *fc_rport_rogue_create(struct fc_lport *lport,
>>>  	 */
>>>  	INIT_LIST_HEAD(&rdata->peers);
>>>  
>>> -	return rport;
>>> +	return rdata;
>>>  }
>>>  
>>>  /**
>>>   * fc_rport_state() - return a string for the state the rport is in
>>> - * @rport: The rport whose state we want to get a string for
>>> + * @rdata: remote port private data
>>>   */
>>> -static const char *fc_rport_state(struct fc_rport *rport)
>>> +static const char *fc_rport_state(struct fc_rport_priv *rdata)
>>>  {
>>>  	const char *cp;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  
>>>  	cp = fc_rport_state_names[rdata->rp_state];
>>>  	if (!cp)
>>> @@ -192,15 +193,14 @@ static unsigned int fc_plogi_get_maxframe(struct fc_els_flogi *flp,
>>>  
>>>  /**
>>>   * fc_rport_state_enter() - Change the rport's state
>>> - * @rport: The rport whose state should change
>>> + * @rdata: The rport whose state should change
>>>   * @new: The new state of the rport
>>>   *
>>>   * Locking Note: Called with the rport lock held
>>>   */
>>> -static void fc_rport_state_enter(struct fc_rport *rport,
>>> +static void fc_rport_state_enter(struct fc_rport_priv *rdata,
>>>  				 enum fc_rport_state new)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	if (rdata->rp_state != new)
>>>  		rdata->retries = 0;
>>>  	rdata->rp_state = new;
>>> @@ -255,7 +255,7 @@ static void fc_rport_work(struct work_struct *work)
>>>  			INIT_LIST_HEAD(&new_rdata->peers);
>>>  			INIT_WORK(&new_rdata->event_work, fc_rport_work);
>>>  
>>> -			fc_rport_state_enter(new_rport, RPORT_ST_READY);
>>> +			fc_rport_state_enter(new_rdata, RPORT_ST_READY);
>>>  		} else {
>>>  			printk(KERN_WARNING "libfc: Failed to allocate "
>>>  			       " memory for rport (%6x)\n", ids.port_id);
>>> @@ -263,20 +263,20 @@ static void fc_rport_work(struct work_struct *work)
>>>  		}
>>>  		if (rport->port_id != FC_FID_DIR_SERV)
>>>  			if (rport_ops->event_callback)
>>> -				rport_ops->event_callback(lport, rport,
>>> +				rport_ops->event_callback(lport, rdata,
>>>  							  RPORT_EV_FAILED);
>>>  		put_device(&rport->dev);
>>>  		rport = new_rport;
>>>  		rdata = new_rport->dd_data;
>>>  		if (rport_ops->event_callback)
>>> -			rport_ops->event_callback(lport, rport, event);
>>> +			rport_ops->event_callback(lport, rdata, event);
>>>  	} else if ((event == RPORT_EV_FAILED) ||
>>>  		   (event == RPORT_EV_LOGO) ||
>>>  		   (event == RPORT_EV_STOP)) {
>>>  		trans_state = rdata->trans_state;
>>>  		mutex_unlock(&rdata->rp_mutex);
>>>  		if (rport_ops->event_callback)
>>> -			rport_ops->event_callback(lport, rport, event);
>>> +			rport_ops->event_callback(lport, rdata, event);
>>>  		cancel_delayed_work_sync(&rdata->retry_work);
>>>  		if (trans_state == FC_PORTSTATE_ROGUE)
>>>  			put_device(&rport->dev);
>>> @@ -292,21 +292,19 @@ static void fc_rport_work(struct work_struct *work)
>>>  
>>>  /**
>>>   * fc_rport_login() - Start the remote port login state machine
>>> - * @rport: Fibre Channel remote port
>>> + * @rdata: private remote port
>>>   *
>>>   * Locking Note: Called without the rport lock held. This
>>>   * function will hold the rport lock, call an _enter_*
>>>   * function and then unlock the rport.
>>>   */
>>> -int fc_rport_login(struct fc_rport *rport)
>>> +int fc_rport_login(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> -
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Login to port\n");
>>> +	FC_RPORT_DBG(rdata, "Login to port\n");
>>>  
>>> -	fc_rport_enter_plogi(rport);
>>> +	fc_rport_enter_plogi(rdata);
>>>  
>>>  	mutex_unlock(&rdata->rp_mutex);
>>>  
>>> @@ -315,7 +313,7 @@ int fc_rport_login(struct fc_rport *rport)
>>>  
>>>  /**
>>>   * fc_rport_enter_delete() - schedule a remote port to be deleted.
>>> - * @rport: Fibre Channel remote port
>>> + * @rdata: private remote port
>>>   * @event: event to report as the reason for deletion
>>>   *
>>>   * Locking Note: Called with the rport lock held.
>>> @@ -327,17 +325,15 @@ int fc_rport_login(struct fc_rport *rport)
>>>   * Since we have the mutex, even if fc_rport_work() is already started,
>>>   * it'll see the new event.
>>>   */
>>> -static void fc_rport_enter_delete(struct fc_rport *rport,
>>> +static void fc_rport_enter_delete(struct fc_rport_priv *rdata,
>>>  				  enum fc_rport_event event)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> -
>>>  	if (rdata->rp_state == RPORT_ST_DELETE)
>>>  		return;
>>>  
>>> -	FC_RPORT_DBG(rport, "Delete port\n");
>>> +	FC_RPORT_DBG(rdata, "Delete port\n");
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_DELETE);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_DELETE);
>>>  
>>>  	if (rdata->event == RPORT_EV_NONE)
>>>  		queue_work(rport_event_queue, &rdata->event_work);
>>> @@ -346,33 +342,31 @@ static void fc_rport_enter_delete(struct fc_rport *rport,
>>>  
>>>  /**
>>>   * fc_rport_logoff() - Logoff and remove an rport
>>> - * @rport: Fibre Channel remote port to be removed
>>> + * @rdata: private remote port
>>>   *
>>>   * Locking Note: Called without the rport lock held. This
>>>   * function will hold the rport lock, call an _enter_*
>>>   * function and then unlock the rport.
>>>   */
>>> -int fc_rport_logoff(struct fc_rport *rport)
>>> +int fc_rport_logoff(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> -
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Remove port\n");
>>> +	FC_RPORT_DBG(rdata, "Remove port\n");
>>>  
>>>  	if (rdata->rp_state == RPORT_ST_DELETE) {
>>> -		FC_RPORT_DBG(rport, "Port in Delete state, not removing\n");
>>> +		FC_RPORT_DBG(rdata, "Port in Delete state, not removing\n");
>>>  		mutex_unlock(&rdata->rp_mutex);
>>>  		goto out;
>>>  	}
>>>  
>>> -	fc_rport_enter_logo(rport);
>>> +	fc_rport_enter_logo(rdata);
>>>  
>>>  	/*
>>>  	 * Change the state to Delete so that we discard
>>>  	 * the response.
>>>  	 */
>>> -	fc_rport_enter_delete(rport, RPORT_EV_STOP);
>>> +	fc_rport_enter_delete(rdata, RPORT_EV_STOP);
>>>  	mutex_unlock(&rdata->rp_mutex);
>>>  
>>>  out:
>>> @@ -381,18 +375,16 @@ out:
>>>  
>>>  /**
>>>   * fc_rport_enter_ready() - The rport is ready
>>> - * @rport: Fibre Channel remote port that is ready
>>> + * @rdata: private remote port
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this routine.
>>>   */
>>> -static void fc_rport_enter_ready(struct fc_rport *rport)
>>> +static void fc_rport_enter_ready(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	fc_rport_state_enter(rdata, RPORT_ST_READY);
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_READY);
>>> -
>>> -	FC_RPORT_DBG(rport, "Port is Ready\n");
>>> +	FC_RPORT_DBG(rdata, "Port is Ready\n");
>>>  
>>>  	if (rdata->event == RPORT_EV_NONE)
>>>  		queue_work(rport_event_queue, &rdata->event_work);
>>> @@ -411,22 +403,21 @@ static void fc_rport_timeout(struct work_struct *work)
>>>  {
>>>  	struct fc_rport_priv *rdata =
>>>  		container_of(work, struct fc_rport_priv, retry_work.work);
>>> -	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>>  	switch (rdata->rp_state) {
>>>  	case RPORT_ST_PLOGI:
>>> -		fc_rport_enter_plogi(rport);
>>> +		fc_rport_enter_plogi(rdata);
>>>  		break;
>>>  	case RPORT_ST_PRLI:
>>> -		fc_rport_enter_prli(rport);
>>> +		fc_rport_enter_prli(rdata);
>>>  		break;
>>>  	case RPORT_ST_RTV:
>>> -		fc_rport_enter_rtv(rport);
>>> +		fc_rport_enter_rtv(rdata);
>>>  		break;
>>>  	case RPORT_ST_LOGO:
>>> -		fc_rport_enter_logo(rport);
>>> +		fc_rport_enter_logo(rdata);
>>>  		break;
>>>  	case RPORT_ST_READY:
>>>  	case RPORT_ST_INIT:
>>> @@ -439,27 +430,25 @@ static void fc_rport_timeout(struct work_struct *work)
>>>  
>>>  /**
>>>   * fc_rport_error() - Error handler, called once retries have been exhausted
>>> - * @rport: The fc_rport object
>>> + * @rdata: private remote port
>>>   * @fp: The frame pointer
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before
>>>   * calling this routine
>>>   */
>>> -static void fc_rport_error(struct fc_rport *rport, struct fc_frame *fp)
>>> +static void fc_rport_error(struct fc_rport_priv *rdata, struct fc_frame *fp)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> -
>>> -	FC_RPORT_DBG(rport, "Error %ld in state %s, retries %d\n",
>>> -		     PTR_ERR(fp), fc_rport_state(rport), rdata->retries);
>>> +	FC_RPORT_DBG(rdata, "Error %ld in state %s, retries %d\n",
>>> +		     PTR_ERR(fp), fc_rport_state(rdata), rdata->retries);
>>>  
>>>  	switch (rdata->rp_state) {
>>>  	case RPORT_ST_PLOGI:
>>>  	case RPORT_ST_PRLI:
>>>  	case RPORT_ST_LOGO:
>>> -		fc_rport_enter_delete(rport, RPORT_EV_FAILED);
>>> +		fc_rport_enter_delete(rdata, RPORT_EV_FAILED);
>>>  		break;
>>>  	case RPORT_ST_RTV:
>>> -		fc_rport_enter_ready(rport);
>>> +		fc_rport_enter_ready(rdata);
>>>  		break;
>>>  	case RPORT_ST_DELETE:
>>>  	case RPORT_ST_READY:
>>> @@ -470,7 +459,7 @@ static void fc_rport_error(struct fc_rport *rport, struct fc_frame *fp)
>>>  
>>>  /**
>>>   * fc_rport_error_retry() - Error handler when retries are desired
>>> - * @rport: The fc_rport object
>>> + * @rdata: private remote port data
>>>   * @fp: The frame pointer
>>>   *
>>>   * If the error was an exchange timeout retry immediately,
>>> @@ -479,18 +468,18 @@ static void fc_rport_error(struct fc_rport *rport, struct fc_frame *fp)
>>>   * Locking Note: The rport lock is expected to be held before
>>>   * calling this routine
>>>   */
>>> -static void fc_rport_error_retry(struct fc_rport *rport, struct fc_frame *fp)
>>> +static void fc_rport_error_retry(struct fc_rport_priv *rdata,
>>> +				 struct fc_frame *fp)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	unsigned long delay = FC_DEF_E_D_TOV;
>>>  
>>>  	/* make sure this isn't an FC_EX_CLOSED error, never retry those */
>>>  	if (PTR_ERR(fp) == -FC_EX_CLOSED)
>>> -		return fc_rport_error(rport, fp);
>>> +		return fc_rport_error(rdata, fp);
>>>  
>>>  	if (rdata->retries < rdata->local_port->max_rport_retry_count) {
>>> -		FC_RPORT_DBG(rport, "Error %ld in state %s, retrying\n",
>>> -			     PTR_ERR(fp), fc_rport_state(rport));
>>> +		FC_RPORT_DBG(rdata, "Error %ld in state %s, retrying\n",
>>> +			     PTR_ERR(fp), fc_rport_state(rdata));
>>>  		rdata->retries++;
>>>  		/* no additional delay on exchange timeouts */
>>>  		if (PTR_ERR(fp) == -FC_EX_TIMEOUT)
>>> @@ -499,24 +488,24 @@ static void fc_rport_error_retry(struct fc_rport *rport, struct fc_frame *fp)
>>>  		return;
>>>  	}
>>>  
>>> -	return fc_rport_error(rport, fp);
>>> +	return fc_rport_error(rdata, fp);
>>>  }
>>>  
>>>  /**
>>>   * fc_rport_plogi_resp() - Handle incoming ELS PLOGI response
>>>   * @sp: current sequence in the PLOGI exchange
>>>   * @fp: response frame
>>> - * @rp_arg: Fibre Channel remote port
>>> + * @rdata_arg: private remote port data
>>>   *
>>>   * Locking Note: This function will be called without the rport lock
>>>   * held, but it will lock, call an _enter_* function or fc_rport_error
>>>   * and then unlock the rport.
>>>   */
>>>  static void fc_rport_plogi_resp(struct fc_seq *sp, struct fc_frame *fp,
>>> -				void *rp_arg)
>>> +				void *rdata_arg)
>>>  {
>>> -	struct fc_rport *rport = rp_arg;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport_priv *rdata = rdata_arg;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  	struct fc_els_flogi *plp = NULL;
>>>  	unsigned int tov;
>>> @@ -526,18 +515,18 @@ static void fc_rport_plogi_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received a PLOGI response\n");
>>> +	FC_RPORT_DBG(rdata, "Received a PLOGI response\n");
>>>  
>>>  	if (rdata->rp_state != RPORT_ST_PLOGI) {
>>> -		FC_RPORT_DBG(rport, "Received a PLOGI response, but in state "
>>> -			     "%s\n", fc_rport_state(rport));
>>> +		FC_RPORT_DBG(rdata, "Received a PLOGI response, but in state "
>>> +			     "%s\n", fc_rport_state(rdata));
>>>  		if (IS_ERR(fp))
>>>  			goto err;
>>>  		goto out;
>>>  	}
>>>  
>>>  	if (IS_ERR(fp)) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		goto err;
>>>  	}
>>>  
>>> @@ -565,11 +554,11 @@ static void fc_rport_plogi_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  		 * we skip PRLI and RTV and go straight to READY.
>>>  		 */
>>>  		if (rport->port_id >= FC_FID_DOM_MGR)
>>> -			fc_rport_enter_ready(rport);
>>> +			fc_rport_enter_ready(rdata);
>>>  		else
>>> -			fc_rport_enter_prli(rport);
>>> +			fc_rport_enter_prli(rdata);
>>>  	} else
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  
>>>  out:
>>>  	fc_frame_free(fp);
>>> @@ -580,33 +569,33 @@ err:
>>>  
>>>  /**
>>>   * fc_rport_enter_plogi() - Send Port Login (PLOGI) request to peer
>>> - * @rport: Fibre Channel remote port to send PLOGI to
>>> + * @rdata: private remote port data
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this routine.
>>>   */
>>> -static void fc_rport_enter_plogi(struct fc_rport *rport)
>>> +static void fc_rport_enter_plogi(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_frame *fp;
>>>  
>>> -	FC_RPORT_DBG(rport, "Port entered PLOGI state from %s state\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Port entered PLOGI state from %s state\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_PLOGI);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_PLOGI);
>>>  
>>>  	rport->maxframe_size = FC_MIN_MAX_PAYLOAD;
>>>  	fp = fc_frame_alloc(lport, sizeof(struct fc_els_flogi));
>>>  	if (!fp) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		return;
>>>  	}
>>>  	rdata->e_d_tov = lport->e_d_tov;
>>>  
>>> -	if (!lport->tt.elsct_send(lport, rport, fp, ELS_PLOGI,
>>> -				  fc_rport_plogi_resp, rport, lport->e_d_tov))
>>> -		fc_rport_error_retry(rport, fp);
>>> +	if (!lport->tt.elsct_send(lport, rdata, fp, ELS_PLOGI,
>>> +				  fc_rport_plogi_resp, rdata, lport->e_d_tov))
>>> +		fc_rport_error_retry(rdata, fp);
>>>  	else
>>>  		get_device(&rport->dev);
>>>  }
>>> @@ -615,17 +604,17 @@ static void fc_rport_enter_plogi(struct fc_rport *rport)
>>>   * fc_rport_prli_resp() - Process Login (PRLI) response handler
>>>   * @sp: current sequence in the PRLI exchange
>>>   * @fp: response frame
>>> - * @rp_arg: Fibre Channel remote port
>>> + * @rdata_arg: private remote port data
>>>   *
>>>   * Locking Note: This function will be called without the rport lock
>>>   * held, but it will lock, call an _enter_* function or fc_rport_error
>>>   * and then unlock the rport.
>>>   */
>>>  static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
>>> -			       void *rp_arg)
>>> +			       void *rdata_arg)
>>>  {
>>> -	struct fc_rport *rport = rp_arg;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport_priv *rdata = rdata_arg;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct {
>>>  		struct fc_els_prli prli;
>>>  		struct fc_els_spp spp;
>>> @@ -636,18 +625,18 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received a PRLI response\n");
>>> +	FC_RPORT_DBG(rdata, "Received a PRLI response\n");
>>>  
>>>  	if (rdata->rp_state != RPORT_ST_PRLI) {
>>> -		FC_RPORT_DBG(rport, "Received a PRLI response, but in state "
>>> -			     "%s\n", fc_rport_state(rport));
>>> +		FC_RPORT_DBG(rdata, "Received a PRLI response, but in state "
>>> +			     "%s\n", fc_rport_state(rdata));
>>>  		if (IS_ERR(fp))
>>>  			goto err;
>>>  		goto out;
>>>  	}
>>>  
>>>  	if (IS_ERR(fp)) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		goto err;
>>>  	}
>>>  
>>> @@ -667,11 +656,11 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  			roles |= FC_RPORT_ROLE_FCP_TARGET;
>>>  
>>>  		rport->roles = roles;
>>> -		fc_rport_enter_rtv(rport);
>>> +		fc_rport_enter_rtv(rdata);
>>>  
>>>  	} else {
>>> -		FC_RPORT_DBG(rport, "Bad ELS response for PRLI command\n");
>>> -		fc_rport_enter_delete(rport, RPORT_EV_FAILED);
>>> +		FC_RPORT_DBG(rdata, "Bad ELS response for PRLI command\n");
>>> +		fc_rport_enter_delete(rdata, RPORT_EV_FAILED);
>>>  	}
>>>  
>>>  out:
>>> @@ -685,42 +674,42 @@ err:
>>>   * fc_rport_logo_resp() - Logout (LOGO) response handler
>>>   * @sp: current sequence in the LOGO exchange
>>>   * @fp: response frame
>>> - * @rp_arg: Fibre Channel remote port
>>> + * @rdata_arg: private remote port data
>>>   *
>>>   * Locking Note: This function will be called without the rport lock
>>>   * held, but it will lock, call an _enter_* function or fc_rport_error
>>>   * and then unlock the rport.
>>>   */
>>>  static void fc_rport_logo_resp(struct fc_seq *sp, struct fc_frame *fp,
>>> -			       void *rp_arg)
>>> +			       void *rdata_arg)
>>>  {
>>> -	struct fc_rport *rport = rp_arg;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport_priv *rdata = rdata_arg;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	u8 op;
>>>  
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received a LOGO response\n");
>>> +	FC_RPORT_DBG(rdata, "Received a LOGO response\n");
>>>  
>>>  	if (rdata->rp_state != RPORT_ST_LOGO) {
>>> -		FC_RPORT_DBG(rport, "Received a LOGO response, but in state "
>>> -			     "%s\n", fc_rport_state(rport));
>>> +		FC_RPORT_DBG(rdata, "Received a LOGO response, but in state "
>>> +			     "%s\n", fc_rport_state(rdata));
>>>  		if (IS_ERR(fp))
>>>  			goto err;
>>>  		goto out;
>>>  	}
>>>  
>>>  	if (IS_ERR(fp)) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		goto err;
>>>  	}
>>>  
>>>  	op = fc_frame_payload_op(fp);
>>>  	if (op == ELS_LS_ACC) {
>>> -		fc_rport_enter_rtv(rport);
>>> +		fc_rport_enter_rtv(rdata);
>>>  	} else {
>>> -		FC_RPORT_DBG(rport, "Bad ELS response for LOGO command\n");
>>> -		fc_rport_enter_delete(rport, RPORT_EV_LOGO);
>>> +		FC_RPORT_DBG(rdata, "Bad ELS response for LOGO command\n");
>>> +		fc_rport_enter_delete(rdata, RPORT_EV_LOGO);
>>>  	}
>>>  
>>>  out:
>>> @@ -732,14 +721,14 @@ err:
>>>  
>>>  /**
>>>   * fc_rport_enter_prli() - Send Process Login (PRLI) request to peer
>>> - * @rport: Fibre Channel remote port to send PRLI to
>>> + * @rdata: private remote port data
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this routine.
>>>   */
>>> -static void fc_rport_enter_prli(struct fc_rport *rport)
>>> +static void fc_rport_enter_prli(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  	struct {
>>>  		struct fc_els_prli prli;
>>> @@ -747,20 +736,20 @@ static void fc_rport_enter_prli(struct fc_rport *rport)
>>>  	} *pp;
>>>  	struct fc_frame *fp;
>>>  
>>> -	FC_RPORT_DBG(rport, "Port entered PRLI state from %s state\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Port entered PRLI state from %s state\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_PRLI);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_PRLI);
>>>  
>>>  	fp = fc_frame_alloc(lport, sizeof(*pp));
>>>  	if (!fp) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		return;
>>>  	}
>>>  
>>> -	if (!lport->tt.elsct_send(lport, rport, fp, ELS_PRLI,
>>> -				  fc_rport_prli_resp, rport, lport->e_d_tov))
>>> -		fc_rport_error_retry(rport, fp);
>>> +	if (!lport->tt.elsct_send(lport, rdata, fp, ELS_PRLI,
>>> +				  fc_rport_prli_resp, rdata, lport->e_d_tov))
>>> +		fc_rport_error_retry(rdata, fp);
>>>  	else
>>>  		get_device(&rport->dev);
>>>  }
>>> @@ -769,7 +758,7 @@ static void fc_rport_enter_prli(struct fc_rport *rport)
>>>   * fc_rport_els_rtv_resp() - Request Timeout Value response handler
>>>   * @sp: current sequence in the RTV exchange
>>>   * @fp: response frame
>>> - * @rp_arg: Fibre Channel remote port
>>> + * @rdata_arg: private remote port data
>>>   *
>>>   * Many targets don't seem to support this.
>>>   *
>>> @@ -778,26 +767,26 @@ static void fc_rport_enter_prli(struct fc_rport *rport)
>>>   * and then unlock the rport.
>>>   */
>>>  static void fc_rport_rtv_resp(struct fc_seq *sp, struct fc_frame *fp,
>>> -			      void *rp_arg)
>>> +			      void *rdata_arg)
>>>  {
>>> -	struct fc_rport *rport = rp_arg;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport_priv *rdata = rdata_arg;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	u8 op;
>>>  
>>>  	mutex_lock(&rdata->rp_mutex);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received a RTV response\n");
>>> +	FC_RPORT_DBG(rdata, "Received a RTV response\n");
>>>  
>>>  	if (rdata->rp_state != RPORT_ST_RTV) {
>>> -		FC_RPORT_DBG(rport, "Received a RTV response, but in state "
>>> -			     "%s\n", fc_rport_state(rport));
>>> +		FC_RPORT_DBG(rdata, "Received a RTV response, but in state "
>>> +			     "%s\n", fc_rport_state(rdata));
>>>  		if (IS_ERR(fp))
>>>  			goto err;
>>>  		goto out;
>>>  	}
>>>  
>>>  	if (IS_ERR(fp)) {
>>> -		fc_rport_error(rport, fp);
>>> +		fc_rport_error(rdata, fp);
>>>  		goto err;
>>>  	}
>>>  
>>> @@ -823,7 +812,7 @@ static void fc_rport_rtv_resp(struct fc_seq *sp, struct fc_frame *fp,
>>>  		}
>>>  	}
>>>  
>>> -	fc_rport_enter_ready(rport);
>>> +	fc_rport_enter_ready(rdata);
>>>  
>>>  out:
>>>  	fc_frame_free(fp);
>>> @@ -834,62 +823,62 @@ err:
>>>  
>>>  /**
>>>   * fc_rport_enter_rtv() - Send Request Timeout Value (RTV) request to peer
>>> - * @rport: Fibre Channel remote port to send RTV to
>>> + * @rdata: private remote port data
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this routine.
>>>   */
>>> -static void fc_rport_enter_rtv(struct fc_rport *rport)
>>> +static void fc_rport_enter_rtv(struct fc_rport_priv *rdata)
>>>  {
>>>  	struct fc_frame *fp;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  
>>> -	FC_RPORT_DBG(rport, "Port entered RTV state from %s state\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Port entered RTV state from %s state\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_RTV);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_RTV);
>>>  
>>>  	fp = fc_frame_alloc(lport, sizeof(struct fc_els_rtv));
>>>  	if (!fp) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		return;
>>>  	}
>>>  
>>> -	if (!lport->tt.elsct_send(lport, rport, fp, ELS_RTV,
>>> -				     fc_rport_rtv_resp, rport, lport->e_d_tov))
>>> -		fc_rport_error_retry(rport, fp);
>>> +	if (!lport->tt.elsct_send(lport, rdata, fp, ELS_RTV,
>>> +				     fc_rport_rtv_resp, rdata, lport->e_d_tov))
>>> +		fc_rport_error_retry(rdata, fp);
>>>  	else
>>>  		get_device(&rport->dev);
>>>  }
>>>  
>>>  /**
>>>   * fc_rport_enter_logo() - Send Logout (LOGO) request to peer
>>> - * @rport: Fibre Channel remote port to send LOGO to
>>> + * @rdata: private remote port data
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this routine.
>>>   */
>>> -static void fc_rport_enter_logo(struct fc_rport *rport)
>>> +static void fc_rport_enter_logo(struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_frame *fp;
>>>  
>>> -	FC_RPORT_DBG(rport, "Port entered LOGO state from %s state\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Port entered LOGO state from %s state\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>> -	fc_rport_state_enter(rport, RPORT_ST_LOGO);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_LOGO);
>>>  
>>>  	fp = fc_frame_alloc(lport, sizeof(struct fc_els_logo));
>>>  	if (!fp) {
>>> -		fc_rport_error_retry(rport, fp);
>>> +		fc_rport_error_retry(rdata, fp);
>>>  		return;
>>>  	}
>>>  
>>> -	if (!lport->tt.elsct_send(lport, rport, fp, ELS_LOGO,
>>> -				  fc_rport_logo_resp, rport, lport->e_d_tov))
>>> -		fc_rport_error_retry(rport, fp);
>>> +	if (!lport->tt.elsct_send(lport, rdata, fp, ELS_LOGO,
>>> +				  fc_rport_logo_resp, rdata, lport->e_d_tov))
>>> +		fc_rport_error_retry(rdata, fp);
>>>  	else
>>>  		get_device(&rport->dev);
>>>  }
>>> @@ -899,16 +888,15 @@ static void fc_rport_enter_logo(struct fc_rport *rport)
>>>   * fc_rport_recv_req() - Receive a request from a rport
>>>   * @sp: current sequence in the PLOGI exchange
>>>   * @fp: response frame
>>> - * @rp_arg: Fibre Channel remote port
>>> + * @rdata_arg: private remote port data
>>>   *
>>>   * Locking Note: Called without the rport lock held. This
>>>   * function will hold the rport lock, call an _enter_*
>>>   * function and then unlock the rport.
>>>   */
>>>  void fc_rport_recv_req(struct fc_seq *sp, struct fc_frame *fp,
>>> -		       struct fc_rport *rport)
>>> +		       struct fc_rport_priv *rdata)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  
>>>  	struct fc_frame_header *fh;
>>> @@ -927,16 +915,16 @@ void fc_rport_recv_req(struct fc_seq *sp, struct fc_frame *fp,
>>>  		op = fc_frame_payload_op(fp);
>>>  		switch (op) {
>>>  		case ELS_PLOGI:
>>> -			fc_rport_recv_plogi_req(rport, sp, fp);
>>> +			fc_rport_recv_plogi_req(rdata, sp, fp);
>>>  			break;
>>>  		case ELS_PRLI:
>>> -			fc_rport_recv_prli_req(rport, sp, fp);
>>> +			fc_rport_recv_prli_req(rdata, sp, fp);
>>>  			break;
>>>  		case ELS_PRLO:
>>> -			fc_rport_recv_prlo_req(rport, sp, fp);
>>> +			fc_rport_recv_prlo_req(rdata, sp, fp);
>>>  			break;
>>>  		case ELS_LOGO:
>>> -			fc_rport_recv_logo_req(rport, sp, fp);
>>> +			fc_rport_recv_logo_req(rdata, sp, fp);
>>>  			break;
>>>  		case ELS_RRQ:
>>>  			els_data.fp = fp;
>>> @@ -958,17 +946,17 @@ void fc_rport_recv_req(struct fc_seq *sp, struct fc_frame *fp,
>>>  
>>>  /**
>>>   * fc_rport_recv_plogi_req() - Handle incoming Port Login (PLOGI) request
>>> - * @rport: Fibre Channel remote port that initiated PLOGI
>>> + * @rdata: private remote port data
>>>   * @sp: current sequence in the PLOGI exchange
>>>   * @fp: PLOGI request frame
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this function.
>>>   */
>>> -static void fc_rport_recv_plogi_req(struct fc_rport *rport,
>>> +static void fc_rport_recv_plogi_req(struct fc_rport_priv *rdata,
>>>  				    struct fc_seq *sp, struct fc_frame *rx_fp)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  	struct fc_frame *fp = rx_fp;
>>>  	struct fc_exch *ep;
>>> @@ -984,13 +972,13 @@ static void fc_rport_recv_plogi_req(struct fc_rport *rport,
>>>  
>>>  	fh = fc_frame_header_get(fp);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received PLOGI request while in state %s\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Received PLOGI request while in state %s\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>>  	sid = ntoh24(fh->fh_s_id);
>>>  	pl = fc_frame_payload_get(fp, sizeof(*pl));
>>>  	if (!pl) {
>>> -		FC_RPORT_DBG(rport, "Received PLOGI too short\n");
>>> +		FC_RPORT_DBG(rdata, "Received PLOGI too short\n");
>>>  		WARN_ON(1);
>>>  		/* XXX TBD: send reject? */
>>>  		fc_frame_free(fp);
>>> @@ -1012,25 +1000,25 @@ static void fc_rport_recv_plogi_req(struct fc_rport *rport,
>>>  	 */
>>>  	switch (rdata->rp_state) {
>>>  	case RPORT_ST_INIT:
>>> -		FC_RPORT_DBG(rport, "Received PLOGI, wwpn %llx state INIT "
>>> +		FC_RPORT_DBG(rdata, "Received PLOGI, wwpn %llx state INIT "
>>>  			     "- reject\n", (unsigned long long)wwpn);
>>>  		reject = ELS_RJT_UNSUP;
>>>  		break;
>>>  	case RPORT_ST_PLOGI:
>>> -		FC_RPORT_DBG(rport, "Received PLOGI in PLOGI state %d\n",
>>> +		FC_RPORT_DBG(rdata, "Received PLOGI in PLOGI state %d\n",
>>>  			     rdata->rp_state);
>>>  		if (wwpn < lport->wwpn)
>>>  			reject = ELS_RJT_INPROG;
>>>  		break;
>>>  	case RPORT_ST_PRLI:
>>>  	case RPORT_ST_READY:
>>> -		FC_RPORT_DBG(rport, "Received PLOGI in logged-in state %d "
>>> +		FC_RPORT_DBG(rdata, "Received PLOGI in logged-in state %d "
>>>  			     "- ignored for now\n", rdata->rp_state);
>>>  		/* XXX TBD - should reset */
>>>  		break;
>>>  	case RPORT_ST_DELETE:
>>>  	default:
>>> -		FC_RPORT_DBG(rport, "Received PLOGI in unexpected "
>>> +		FC_RPORT_DBG(rdata, "Received PLOGI in unexpected "
>>>  			     "state %d\n", rdata->rp_state);
>>>  		fc_frame_free(fp);
>>>  		return;
>>> @@ -1074,24 +1062,24 @@ static void fc_rport_recv_plogi_req(struct fc_rport *rport,
>>>  				       FC_TYPE_ELS, f_ctl, 0);
>>>  			lport->tt.seq_send(lport, sp, fp);
>>>  			if (rdata->rp_state == RPORT_ST_PLOGI)
>>> -				fc_rport_enter_prli(rport);
>>> +				fc_rport_enter_prli(rdata);
>>>  		}
>>>  	}
>>>  }
>>>  
>>>  /**
>>>   * fc_rport_recv_prli_req() - Handle incoming Process Login (PRLI) request
>>> - * @rport: Fibre Channel remote port that initiated PRLI
>>> + * @rdata: private remote port data
>>>   * @sp: current sequence in the PRLI exchange
>>>   * @fp: PRLI request frame
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this function.
>>>   */
>>> -static void fc_rport_recv_prli_req(struct fc_rport *rport,
>>> +static void fc_rport_recv_prli_req(struct fc_rport_priv *rdata,
>>>  				   struct fc_seq *sp, struct fc_frame *rx_fp)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  	struct fc_exch *ep;
>>>  	struct fc_frame *fp;
>>> @@ -1115,8 +1103,8 @@ static void fc_rport_recv_prli_req(struct fc_rport *rport,
>>>  
>>>  	fh = fc_frame_header_get(rx_fp);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received PRLI request while in state %s\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Received PRLI request while in state %s\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>>  	switch (rdata->rp_state) {
>>>  	case RPORT_ST_PRLI:
>>> @@ -1220,7 +1208,7 @@ static void fc_rport_recv_prli_req(struct fc_rport *rport,
>>>  		 */
>>>  		switch (rdata->rp_state) {
>>>  		case RPORT_ST_PRLI:
>>> -			fc_rport_enter_ready(rport);
>>> +			fc_rport_enter_ready(rdata);
>>>  			break;
>>>  		case RPORT_ST_READY:
>>>  			break;
>>> @@ -1233,17 +1221,17 @@ static void fc_rport_recv_prli_req(struct fc_rport *rport,
>>>  
>>>  /**
>>>   * fc_rport_recv_prlo_req() - Handle incoming Process Logout (PRLO) request
>>> - * @rport: Fibre Channel remote port that initiated PRLO
>>> + * @rdata: private remote port data
>>>   * @sp: current sequence in the PRLO exchange
>>>   * @fp: PRLO request frame
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this function.
>>>   */
>>> -static void fc_rport_recv_prlo_req(struct fc_rport *rport, struct fc_seq *sp,
>>> +static void fc_rport_recv_prlo_req(struct fc_rport_priv *rdata,
>>> +				   struct fc_seq *sp,
>>>  				   struct fc_frame *fp)
>>>  {
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  
>>>  	struct fc_frame_header *fh;
>>> @@ -1251,8 +1239,8 @@ static void fc_rport_recv_prlo_req(struct fc_rport *rport, struct fc_seq *sp,
>>>  
>>>  	fh = fc_frame_header_get(fp);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received PRLO request while in state %s\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Received PRLO request while in state %s\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>>  	if (rdata->rp_state == RPORT_ST_DELETE) {
>>>  		fc_frame_free(fp);
>>> @@ -1268,24 +1256,24 @@ static void fc_rport_recv_prlo_req(struct fc_rport *rport, struct fc_seq *sp,
>>>  
>>>  /**
>>>   * fc_rport_recv_logo_req() - Handle incoming Logout (LOGO) request
>>> - * @rport: Fibre Channel remote port that initiated LOGO
>>> + * @rdata: private remote port data
>>>   * @sp: current sequence in the LOGO exchange
>>>   * @fp: LOGO request frame
>>>   *
>>>   * Locking Note: The rport lock is expected to be held before calling
>>>   * this function.
>>>   */
>>> -static void fc_rport_recv_logo_req(struct fc_rport *rport, struct fc_seq *sp,
>>> +static void fc_rport_recv_logo_req(struct fc_rport_priv *rdata,
>>> +				   struct fc_seq *sp,
>>>  				   struct fc_frame *fp)
>>>  {
>>>  	struct fc_frame_header *fh;
>>> -	struct fc_rport_priv *rdata = rport->dd_data;
>>>  	struct fc_lport *lport = rdata->local_port;
>>>  
>>>  	fh = fc_frame_header_get(fp);
>>>  
>>> -	FC_RPORT_DBG(rport, "Received LOGO request while in state %s\n",
>>> -		     fc_rport_state(rport));
>>> +	FC_RPORT_DBG(rdata, "Received LOGO request while in state %s\n",
>>> +		     fc_rport_state(rdata));
>>>  
>>>  	if (rdata->rp_state == RPORT_ST_DELETE) {
>>>  		fc_frame_free(fp);
>>> @@ -1293,7 +1281,7 @@ static void fc_rport_recv_logo_req(struct fc_rport *rport, struct fc_seq *sp,
>>>  	}
>>>  
>>>  	rdata->event = RPORT_EV_LOGO;
>>> -	fc_rport_state_enter(rport, RPORT_ST_DELETE);
>>> +	fc_rport_state_enter(rdata, RPORT_ST_DELETE);
>>>  	queue_work(rport_event_queue, &rdata->event_work);
>>>  
>>>  	lport->tt.seq_els_rsp_send(sp, ELS_LS_ACC, NULL);
>>> diff --git a/include/scsi/fc_encode.h b/include/scsi/fc_encode.h
>>> index a0ff61c..db29001 100644
>>> --- a/include/scsi/fc_encode.h
>>> +++ b/include/scsi/fc_encode.h
>>> @@ -249,10 +249,13 @@ static inline void fc_scr_fill(struct fc_lport *lport, struct fc_frame *fp)
>>>  /**
>>>   * fc_els_fill - Fill in an ELS  request frame
>>>   */
>>> -static inline int fc_els_fill(struct fc_lport *lport, struct fc_rport *rport,
>>> +static inline int fc_els_fill(struct fc_lport *lport,
>>> +		       struct fc_rport_priv *rdata,
>>>  		       struct fc_frame *fp, unsigned int op,
>>>  		       enum fc_rctl *r_ctl, u32 *did, enum fc_fh_type *fh_type)
>>>  {
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);
>>> +
>>>  	switch (op) {
>>>  	case ELS_PLOGI:
>>>  		fc_plogi_fill(lport, fp, ELS_PLOGI);
>>> diff --git a/include/scsi/libfc.h b/include/scsi/libfc.h
>>> index f2d5ddf..8a012f9 100644
>>> --- a/include/scsi/libfc.h
>>> +++ b/include/scsi/libfc.h
>>> @@ -76,10 +76,10 @@ do {								\
>>>  				(lport)->host->host_no,			\
>>>  				(port_id), ##args))
>>>  
>>> -#define FC_RPORT_DBG(rport, fmt, args...)				\
>>> +#define FC_RPORT_DBG(rdata, fmt, args...)				\
>>>  do {									\
>>> -	struct fc_rport_priv *rdata = rport->dd_data;			\
>>>  	struct fc_lport *lport = rdata->local_port;			\
>>> +	struct fc_rport *rport = PRIV_TO_RPORT(rdata);			\
>>>  	FC_RPORT_ID_DBG(lport, rport->port_id, fmt, ##args);		\
>>>  } while (0)
>>>  
>>> @@ -186,8 +186,10 @@ enum fc_rport_event {
>>>   */
>>>  #define fc_rport_priv fc_rport_libfc_priv
>>>  
>>> +struct fc_rport_priv;
>>> +
>>>  struct fc_rport_operations {
>>> -	void (*event_callback)(struct fc_lport *, struct fc_rport *,
>>> +	void (*event_callback)(struct fc_lport *, struct fc_rport_priv *,
>>>  			       enum fc_rport_event);
>>>  };
>>>  
>>> @@ -429,7 +431,7 @@ struct libfc_function_template {
>>>  	 * STATUS: OPTIONAL
>>>  	 */
>>>  	struct fc_seq *(*elsct_send)(struct fc_lport *lport,
>>> -				     struct fc_rport *rport,
>>> +				     struct fc_rport_priv *,
>>>  				     struct fc_frame *fp,
>>>  				     unsigned int op,
>>>  				     void (*resp)(struct fc_seq *,
>>> @@ -574,8 +576,8 @@ struct libfc_function_template {
>>>  	/*
>>>  	 * Create a remote port
>>>  	 */
>>> -	struct fc_rport *(*rport_create)(struct fc_lport *,
>>> -					 struct fc_rport_identifiers *);
>>> +	struct fc_rport_priv *(*rport_create)(struct fc_lport *,
>>> +					      struct fc_rport_identifiers *);
>>>  
>>>  	/*
>>>  	 * Initiates the RP state machine. It is called from the LP module.
>>> @@ -588,7 +590,7 @@ struct libfc_function_template {
>>>  	 *
>>>  	 * STATUS: OPTIONAL
>>>  	 */
>>> -	int (*rport_login)(struct fc_rport *rport);
>>> +	int (*rport_login)(struct fc_rport_priv *);
>>>  
>>>  	/*
>>>  	 * Logoff, and remove the rport from the transport if
>>> @@ -596,7 +598,7 @@ struct libfc_function_template {
>>>  	 *
>>>  	 * STATUS: OPTIONAL
>>>  	 */
>>> -	int (*rport_logoff)(struct fc_rport *rport);
>>> +	int (*rport_logoff)(struct fc_rport_priv *);
>>>  
>>>  	/*
>>>  	 * Receive a request from a remote port.
>>> @@ -604,14 +606,14 @@ struct libfc_function_template {
>>>  	 * STATUS: OPTIONAL
>>>  	 */
>>>  	void (*rport_recv_req)(struct fc_seq *, struct fc_frame *,
>>> -			       struct fc_rport *);
>>> +			       struct fc_rport_priv *);
>>>  
>>>  	/*
>>>  	 * Look up an rport by its port ID.
>>>  	 *
>>>  	 * STATUS: OPTIONAL
>>>  	 */
>>> -	struct fc_rport *(*rport_lookup)(const struct fc_lport *, u32);
>>> +	struct fc_rport_priv *(*rport_lookup)(const struct fc_lport *, u32);
>>>  
>>>  	/*
>>>  	 * Send a fcp cmd from fsp pkt.
>>> @@ -701,8 +703,8 @@ struct fc_lport {
>>>  	/* Associations */
>>>  	struct Scsi_Host	*host;
>>>  	struct list_head	ema_list;
>>> -	struct fc_rport		*dns_rp;
>>> -	struct fc_rport		*ptp_rp;
>>> +	struct fc_rport_priv	*dns_rp;
>>> +	struct fc_rport_priv	*ptp_rp;
>>>  	void			*scsi_priv;
>>>  	struct fc_disc          disc;
>>>  
>>>
>>>
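For reference, the faulting address in Rob's trace is consistent with this diagnosis: fffffffffffffc20 is NULL minus 0x3e0, i.e. what you get by computing the rport back-pointer from a NULL rdata. Here's a minimal user-space sketch of the failure mode. The structs and the 0x3e0 size are illustrative stand-ins, not the real libfc definitions, and the macro is only analogous to PRIV_TO_RPORT():

```c
#include <stdint.h>

/* Simplified stand-ins, NOT the real libfc types; 0x3e0 is an
 * illustrative size chosen to match the faulting address in the oops. */
struct fc_rport { char pad[0x3e0]; };
struct fc_rport_priv { int rp_state; };

/* Analogous to PRIV_TO_RPORT(): the private data is allocated
 * immediately after the rport, so step back by sizeof(struct fc_rport)
 * to recover the rport from its private data. */
#define PRIV_TO_RPORT(x) \
	((struct fc_rport *)((char *)(x) - sizeof(struct fc_rport)))

/* With rdata == NULL the derived pointer is non-NULL garbage:
 * (char *)0 - 0x3e0 == 0xfffffffffffffc20 on x86-64.  So a NULL check
 * on the derived rport passes, and the later dereference faults; the
 * check has to be on rdata itself, as the next patch does. */
uintptr_t derived_rport_addr(const struct fc_rport_priv *rdata)
{
	return (uintptr_t)PRIV_TO_RPORT(rdata);
}
```

Purely illustrative; the actual fix is just to test rdata rather than rport for NULL in the ELS_LOGO case of fc_els_fill().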
>>> _______________________________________________
>>> devel mailing list
>>> devel at open-fcoe.org
>>> http://www.open-fcoe.org/mailman/listinfo/devel
> 