Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
author    David S. Miller <davem@davemloft.net>
          Mon, 25 Jul 2016 05:02:36 +0000 (22:02 -0700)
committer David S. Miller <davem@davemloft.net>
          Mon, 25 Jul 2016 05:02:36 +0000 (22:02 -0700)
Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains Netfilter/IPVS updates for net-next,
they are:

1) Count pre-established connections as active in "least connection"
   schedulers, so that pre-established connections avoid overloading
   backend servers on peak demands, from Michal Kubecek via Simon Horman.

2) Address a race condition when resizing the conntrack table by caching
   the bucket size when fully iterating over the hashtable in these
   three possible scenarios: 1) dump via /proc/net/nf_conntrack,
   2) unlinking a userspace helper and 3) unlinking a custom conntrack
   timeout. From Liping Zhang.

3) Revisit the early_drop() path to perform lockless traversal on
   conntrack eviction under stress, using del_timer() as the
   synchronization point to avoid two CPUs evicting the same entry,
   from Florian Westphal.

4) Move NAT hlist_head to nf_conn object, this simplifies the existing
   NAT extension and it doesn't increase size since recent patches to
   align nf_conn, from Florian.

5) Use rhashtable for the by-source NAT hashtable, also from Florian.

6) Don't allow --physdev-is-out from the OUTPUT chain, just as
   --physdev-out is not allowed there either, from Hangbin Liu.

7) Automatically enable nf_conntrack counters if the user tries to
   match ct bytes/packets from nftables, from Liping Zhang.

8) Remove possible_net_t fields in nf_tables set objects since we just
   simply pass the net pointer to the backend set type implementations.

9) Fix possible off-by-one in h323, from Toby DiPasquale.

10) early_drop() may be called from the ctnetlink path, so we must hold
    the rcu read side lock there too; this amends Florian's patch #3
    in this batch, from Liping Zhang.

11) Use binary search to validate jump offsets in x_tables; this
    addresses the O(n!) validation that was recently introduced to
    resolve security issues with unprivileged namespaces, from Florian.

12) Fix reference leak to connlabel in error path of nft_ct, from Zhang.

13) Three updates for nft_log: fix a log prefix leak in the error path,
    bail out on a loglevel larger than debug, and set the new
    NF_LOG_F_COPY_LEN flag when snaplen is specified. Again from Zhang.

14) Allow to filter rule dumps in nf_tables based on table and chain
    names.

15) Simplify connlabel to always use 128 bits to store labels and
    get rid of unused function in xt_connlabel, from Florian.

16) Replace set_expect_timeout() by mod_timer() from the h323 conntrack
    helper, by Gao Feng.

17) Put back x_tables module reference in nft_compat on error, from
    Liping Zhang.

18) Add a reference count to the x_tables extensions cache in
    nft_compat, so we can remove them when unused and avoid a crash
    if the extensions are removed via rmmod, again from Zhang.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
718 files changed:
Documentation/devicetree/bindings/net/dsa/b53.txt
Documentation/devicetree/bindings/net/hisilicon-femac-mdio.txt [new file with mode: 0644]
Documentation/devicetree/bindings/net/hisilicon-femac.txt [new file with mode: 0644]
Documentation/devicetree/bindings/net/micrel.txt
Documentation/devicetree/bindings/net/socfpga-dwmac.txt
Documentation/gdb-kernel-debugging.txt
Documentation/networking/rds.txt
Documentation/networking/vrf.txt
Documentation/x86/intel_mpx.txt
Documentation/x86/tlb.txt
Documentation/x86/x86_64/machinecheck
MAINTAINERS
Makefile
arch/arm/boot/dts/armada-385-linksys.dtsi
arch/arm/boot/dts/sun4i-a10.dtsi
arch/arm/boot/dts/sun5i-a10s.dtsi
arch/arm/boot/dts/sun5i-r8-chip.dts
arch/arm/boot/dts/sun7i-a20.dtsi
arch/arm/boot/dts/tegra30-beaver.dts
arch/arm/mach-mvebu/Makefile
arch/arm/mach-mvebu/coherency.c
arch/arm64/include/asm/cputype.h
arch/arm64/include/asm/ptrace.h
arch/arm64/kernel/asm-offsets.c
arch/arm64/kernel/cpu_errata.c
arch/arm64/kernel/entry.S
arch/arm64/mm/fault.c
arch/m32r/boot/compressed/m32r_sio.c
arch/m68k/coldfire/head.S
arch/m68k/coldfire/m5272.c
arch/m68k/coldfire/pci.c
arch/m68k/configs/amiga_defconfig
arch/m68k/configs/apollo_defconfig
arch/m68k/configs/atari_defconfig
arch/m68k/configs/bvme6000_defconfig
arch/m68k/configs/hp300_defconfig
arch/m68k/configs/mac_defconfig
arch/m68k/configs/multi_defconfig
arch/m68k/configs/mvme147_defconfig
arch/m68k/configs/mvme16x_defconfig
arch/m68k/configs/q40_defconfig
arch/m68k/configs/sun3_defconfig
arch/m68k/configs/sun3x_defconfig
arch/m68k/ifpsp060/src/fpsp.S
arch/m68k/ifpsp060/src/pfpsp.S
arch/m68k/include/asm/dma.h
arch/m68k/include/asm/m525xsim.h
arch/m68k/include/asm/mcfmmu.h
arch/m68k/include/asm/q40_master.h
arch/m68k/mac/iop.c
arch/m68k/math-emu/fp_decode.h
arch/mips/include/asm/pgtable.h
arch/s390/kernel/perf_cpum_sf.c
arch/x86/events/amd/ibs.c
arch/x86/events/core.c
arch/x86/events/intel/Makefile
arch/x86/events/intel/core.c
arch/x86/include/asm/cpufeatures.h
arch/x86/kernel/amd_nb.c
arch/x86/kernel/early-quirks.c
arch/x86/mm/kasan_init_64.c
arch/x86/pci/acpi.c
arch/x86/power/hibernate_64.c
arch/x86/power/hibernate_asm_64.S
block/ioprio.c
crypto/asymmetric_keys/mscode_parser.c
crypto/asymmetric_keys/pkcs7_verify.c
crypto/asymmetric_keys/restrict.c
crypto/rsa-pkcs1pad.c
drivers/acpi/acpi_dbg.c
drivers/acpi/acpica/exconfig.c
drivers/acpi/acpica/nsparse.c
drivers/acpi/ec.c
drivers/acpi/nfit.c
drivers/acpi/pci_link.c
drivers/acpi/utils.c
drivers/ata/libata-core.c
drivers/atm/nicstar.c
drivers/bcma/bcma_private.h
drivers/block/xen-blkfront.c
drivers/bluetooth/ath3k.c
drivers/bluetooth/bpa10x.c
drivers/bluetooth/btmrvl_main.c
drivers/bluetooth/btmrvl_sdio.c
drivers/bluetooth/btsdio.c
drivers/bluetooth/btusb.c
drivers/bluetooth/btwilink.c
drivers/bluetooth/hci_intel.c
drivers/bluetooth/hci_ldisc.c
drivers/bluetooth/hci_vhci.c
drivers/clk/at91/clk-programmable.c
drivers/clk/sunxi/clk-sun4i-display.c
drivers/clk/sunxi/clk-sun4i-tcon-ch1.c
drivers/cpuidle/cpuidle.c
drivers/crypto/qat/qat_common/Makefile
drivers/edac/sb_edac.c
drivers/gpio/Kconfig
drivers/gpio/gpio-sch.c
drivers/gpio/gpiolib-legacy.c
drivers/gpio/gpiolib.c
drivers/gpu/drm/amd/amdgpu/atombios_i2c.c
drivers/gpu/drm/amd/amdgpu/atombios_i2c.h
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.c
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomctrl.h
drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
drivers/gpu/drm/amd/powerplay/hwmgr/tonga_processpptables.c
drivers/gpu/drm/i915/i915_drv.c
drivers/gpu/drm/i915/i915_drv.h
drivers/gpu/drm/i915/i915_gem_shrinker.c
drivers/gpu/drm/i915/i915_gem_stolen.c
drivers/gpu/drm/i915/i915_irq.c
drivers/gpu/drm/i915/i915_reg.h
drivers/gpu/drm/i915/intel_csr.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_dp.c
drivers/gpu/drm/i915/intel_lrc.c
drivers/gpu/drm/i915/intel_opregion.c
drivers/gpu/drm/i915/intel_panel.c
drivers/gpu/drm/i915/intel_pm.c
drivers/gpu/drm/i915/intel_ringbuffer.c
drivers/gpu/drm/nouveau/nvkm/engine/disp/sorgf119.c
drivers/gpu/drm/sun4i/sun4i_crtc.c
drivers/gpu/drm/sun4i/sun4i_drv.c
drivers/gpu/drm/ttm/ttm_bo.c
drivers/gpu/drm/vmwgfx/vmwgfx_dmabuf.c
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
drivers/i2c/busses/i2c-qup.c
drivers/i2c/busses/i2c-tegra.c
drivers/i2c/i2c-boardinfo.c
drivers/i2c/muxes/i2c-mux-reg.c
drivers/infiniband/core/sysfs.c
drivers/infiniband/hw/hfi1/chip.c
drivers/infiniband/hw/hfi1/ud.c
drivers/infiniband/hw/i40iw/i40iw_main.c
drivers/infiniband/hw/i40iw/i40iw_verbs.c
drivers/infiniband/hw/mlx4/qp.c
drivers/input/joystick/xpad.c
drivers/input/rmi4/rmi_bus.c
drivers/input/rmi4/rmi_f12.c
drivers/input/touchscreen/ts4800-ts.c
drivers/input/touchscreen/tsc2004.c
drivers/input/touchscreen/tsc2005.c
drivers/input/touchscreen/tsc200x-core.c
drivers/input/touchscreen/tsc200x-core.h
drivers/input/touchscreen/wacom_w8001.c
drivers/iommu/amd_iommu_init.c
drivers/iommu/intel-iommu.c
drivers/irqchip/irq-mips-gic.c
drivers/media/i2c/adv7604.c
drivers/media/radio/wl128x/fmdrv_common.c
drivers/media/usb/airspy/airspy.c
drivers/media/v4l2-core/v4l2-ioctl.c
drivers/misc/ti-st/st_core.c
drivers/mmc/card/block.c
drivers/mmc/host/pxamci.c
drivers/mtd/nand/omap2.c
drivers/net/bonding/bond_netlink.c
drivers/net/dsa/b53/b53_common.c
drivers/net/dsa/b53/b53_priv.h
drivers/net/dsa/b53/b53_srab.c
drivers/net/dsa/mv88e6xxx/chip.c
drivers/net/dsa/mv88e6xxx/mv88e6xxx.h
drivers/net/ethernet/adi/bfin_mac.c
drivers/net/ethernet/adi/bfin_mac.h
drivers/net/ethernet/agere/et131x.c
drivers/net/ethernet/amd/au1000_eth.c
drivers/net/ethernet/amd/au1000_eth.h
drivers/net/ethernet/aurora/nb8800.c
drivers/net/ethernet/broadcom/Kconfig
drivers/net/ethernet/broadcom/Makefile
drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c [new file with mode: 0644]
drivers/net/ethernet/broadcom/bgmac-bcma.c [new file with mode: 0644]
drivers/net/ethernet/broadcom/bgmac-platform.c [new file with mode: 0644]
drivers/net/ethernet/broadcom/bgmac.c
drivers/net/ethernet/broadcom/bgmac.h
drivers/net/ethernet/broadcom/bnxt/bnxt.c
drivers/net/ethernet/broadcom/bnxt/bnxt.h
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
drivers/net/ethernet/broadcom/genet/bcmgenet.c
drivers/net/ethernet/ethoc.c
drivers/net/ethernet/ezchip/nps_enet.c
drivers/net/ethernet/faraday/ftgmac100.c
drivers/net/ethernet/hisilicon/Kconfig
drivers/net/ethernet/hisilicon/Makefile
drivers/net/ethernet/hisilicon/hisi_femac.c [new file with mode: 0644]
drivers/net/ethernet/ibm/ibmvnic.c
drivers/net/ethernet/ibm/ibmvnic.h
drivers/net/ethernet/intel/fm10k/fm10k.h
drivers/net/ethernet/intel/fm10k/fm10k_common.c
drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
drivers/net/ethernet/intel/fm10k/fm10k_main.c
drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
drivers/net/ethernet/intel/fm10k/fm10k_pf.c
drivers/net/ethernet/intel/fm10k/fm10k_type.h
drivers/net/ethernet/intel/fm10k/fm10k_vf.c
drivers/net/ethernet/intel/i40e/i40e_main.c
drivers/net/ethernet/intel/i40e/i40e_txrx.c
drivers/net/ethernet/intel/i40evf/i40e_txrx.c
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
drivers/net/ethernet/marvell/mvneta.c
drivers/net/ethernet/marvell/pxa168_eth.c
drivers/net/ethernet/mediatek/mtk_eth_soc.c
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
drivers/net/ethernet/mellanox/mlx4/en_rx.c
drivers/net/ethernet/mellanox/mlx4/en_tx.c
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
drivers/net/ethernet/mellanox/mlx5/core/vxlan.c
drivers/net/ethernet/mellanox/mlxsw/core.c
drivers/net/ethernet/mellanox/mlxsw/reg.h
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
drivers/net/ethernet/pasemi/pasemi_mac.c
drivers/net/ethernet/pasemi/pasemi_mac.h
drivers/net/ethernet/pasemi/pasemi_mac_ethtool.c
drivers/net/ethernet/renesas/ravb_main.c
drivers/net/ethernet/renesas/sh_eth.c
drivers/net/ethernet/smsc/smc91x.c
drivers/net/ethernet/smsc/smsc9420.c
drivers/net/ethernet/stmicro/stmmac/Makefile
drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c [new file with mode: 0644]
drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h [new file with mode: 0644]
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
drivers/net/ethernet/synopsys/dwc_eth_qos.c
drivers/net/ethernet/ti/Kconfig
drivers/net/ethernet/ti/cpmac.c
drivers/net/ethernet/toshiba/tc35815.c
drivers/net/ethernet/xilinx/ll_temac.h
drivers/net/ethernet/xilinx/ll_temac_main.c
drivers/net/ethernet/xilinx/xilinx_axienet.h
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
drivers/net/geneve.c
drivers/net/hyperv/netvsc.c
drivers/net/ieee802154/atusb.c
drivers/net/ieee802154/fakelb.c
drivers/net/ieee802154/mrf24j40.c
drivers/net/macsec.c
drivers/net/macvtap.c
drivers/net/phy/Kconfig
drivers/net/phy/Makefile
drivers/net/phy/marvell.c
drivers/net/phy/mdio-hisi-femac.c [new file with mode: 0644]
drivers/net/phy/micrel.c
drivers/net/ppp/ppp_generic.c
drivers/net/tun.c
drivers/net/usb/ax88172a.c
drivers/net/usb/r8152.c
drivers/net/usb/rndis_host.c
drivers/net/virtio_net.c
drivers/net/vxlan.c
drivers/net/wan/fsl_ucc_hdlc.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h
drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
drivers/net/wireless/broadcom/brcm80211/brcmsmac/main.c
drivers/net/wireless/cisco/airo.c
drivers/net/wireless/intel/ipw2x00/ipw2200.c
drivers/net/wireless/intel/iwlwifi/Makefile
drivers/net/wireless/intel/iwlwifi/dvm/lib.c
drivers/net/wireless/intel/iwlwifi/dvm/main.c
drivers/net/wireless/intel/iwlwifi/dvm/rxon.c
drivers/net/wireless/intel/iwlwifi/iwl-9000.c
drivers/net/wireless/intel/iwlwifi/iwl-a000.c [new file with mode: 0644]
drivers/net/wireless/intel/iwlwifi/iwl-config.h
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
drivers/net/wireless/intel/iwlwifi/iwl-debug.h
drivers/net/wireless/intel/iwlwifi/iwl-devtrace-io.h
drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c
drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h
drivers/net/wireless/intel/iwlwifi/iwl-fh.h
drivers/net/wireless/intel/iwlwifi/iwl-fw-error-dump.h
drivers/net/wireless/intel/iwlwifi/iwl-fw-file.h
drivers/net/wireless/intel/iwlwifi/iwl-fw.h
drivers/net/wireless/intel/iwlwifi/iwl-io.c
drivers/net/wireless/intel/iwlwifi/iwl-io.h
drivers/net/wireless/intel/iwlwifi/iwl-modparams.h
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
drivers/net/wireless/intel/iwlwifi/iwl-prph.h
drivers/net/wireless/intel/iwlwifi/iwl-trans.h
drivers/net/wireless/intel/iwlwifi/mvm/coex.c
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-coex.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-mac.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-power.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-rx.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-sta.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-stats.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api-tx.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-api.h
drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
drivers/net/wireless/intel/iwlwifi/mvm/power.c
drivers/net/wireless/intel/iwlwifi/mvm/rs.c
drivers/net/wireless/intel/iwlwifi/mvm/rs.h
drivers/net/wireless/intel/iwlwifi/mvm/rx.c
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
drivers/net/wireless/intel/iwlwifi/mvm/sf.c
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
drivers/net/wireless/intel/iwlwifi/mvm/sta.h
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
drivers/net/wireless/intel/iwlwifi/mvm/utils.c
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
drivers/net/wireless/intel/iwlwifi/pcie/rx.c
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
drivers/net/wireless/marvell/libertas/cmdresp.c
drivers/net/wireless/marvell/libertas_tf/main.c
drivers/net/wireless/marvell/mwifiex/cfg80211.c
drivers/net/wireless/marvell/mwifiex/cmdevt.c
drivers/net/wireless/marvell/mwifiex/fw.h
drivers/net/wireless/marvell/mwifiex/init.c
drivers/net/wireless/marvell/mwifiex/ioctl.h
drivers/net/wireless/marvell/mwifiex/join.c
drivers/net/wireless/marvell/mwifiex/main.h
drivers/net/wireless/marvell/mwifiex/pcie.c
drivers/net/wireless/marvell/mwifiex/pcie.h
drivers/net/wireless/marvell/mwifiex/scan.c
drivers/net/wireless/marvell/mwifiex/sta_cmd.c
drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c
drivers/net/wireless/marvell/mwifiex/sta_event.c
drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
drivers/net/wireless/marvell/mwifiex/uap_cmd.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h
drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h
drivers/net/wireless/realtek/rtlwifi/core.c
drivers/net/wireless/realtek/rtlwifi/debug.c
drivers/net/wireless/realtek/rtlwifi/debug.h
drivers/net/wireless/realtek/rtlwifi/efuse.c
drivers/net/wireless/realtek/rtlwifi/efuse.h
drivers/net/wireless/realtek/rtlwifi/ps.c
drivers/net/wireless/realtek/rtlwifi/ps.h
drivers/net/wireless/realtek/rtlwifi/rc.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.h
drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.c
drivers/net/wireless/realtek/rtlwifi/rtl8192c/phy_common.h
drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ce/phy.h
drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8192cu/mac.c
drivers/net/wireless/realtek/rtlwifi/rtl8192cu/mac.h
drivers/net/wireless/realtek/rtlwifi/rtl8192cu/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8192de/phy.h
drivers/net/wireless/realtek/rtlwifi/rtl8192de/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/phy.h
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8192se/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8192se/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c
drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8723be/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.h
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/rf.c
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
drivers/net/wireless/realtek/rtlwifi/stats.c
drivers/net/wireless/realtek/rtlwifi/stats.h
drivers/net/wireless/realtek/rtlwifi/wifi.h
drivers/net/wireless/ti/wl18xx/event.c
drivers/net/wireless/ti/wl18xx/event.h
drivers/net/wireless/ti/wlcore/cmd.c
drivers/net/wireless/ti/wlcore/main.c
drivers/net/wireless/ti/wlcore/sdio.c
drivers/nfc/Kconfig
drivers/nfc/fdp/fdp.c
drivers/nfc/nfcsim.c
drivers/nfc/nfcwilink.c
drivers/nfc/pn533/usb.c
drivers/nfc/port100.c
drivers/nfc/trf7970a.c
drivers/nvme/host/core.c
drivers/of/of_mdio.c
drivers/pps/clients/pps_parport.c
drivers/regulator/qcom_smd-regulator.c
drivers/scsi/ipr.c
drivers/scsi/qla2xxx/qla_isr.c
drivers/scsi/scsi_devinfo.c
drivers/tty/vt/keyboard.c
drivers/xen/xen-acpi-processor.c
drivers/xen/xenbus/xenbus_dev_frontend.c
drivers/xen/xenbus/xenbus_xs.c
fs/9p/vfs_inode.c
fs/9p/vfs_inode_dotl.c
fs/ceph/file.c
fs/cifs/dir.c
fs/configfs/file.c
fs/ecryptfs/crypto.c
fs/ecryptfs/file.c
fs/ecryptfs/kthread.c
fs/ecryptfs/main.c
fs/fs-writeback.c
fs/fuse/dir.c
fs/gfs2/inode.c
fs/nfs/dir.c
fs/overlayfs/dir.c
fs/overlayfs/inode.c
fs/overlayfs/overlayfs.h
fs/xfs/xfs_ioctl.c
include/acpi/acpi_drivers.h
include/acpi/acpixf.h
include/asm-generic/vmlinux.lds.h
include/drm/ttm/ttm_bo_api.h
include/linux/bcma/bcma.h
include/linux/bpf.h
include/linux/filter.h
include/linux/huge_mm.h
include/linux/ieee802154.h
include/linux/memcontrol.h
include/linux/micrel_phy.h
include/linux/mlx4/qp.h
include/linux/mlx5/driver.h
include/linux/mlx5/mlx5_ifc.h
include/linux/mroute.h
include/linux/mroute6.h
include/linux/netdevice.h
include/linux/of_mdio.h
include/linux/perf_event.h
include/linux/posix_acl.h
include/linux/radix-tree.h
include/linux/rbtree.h
include/linux/rbtree_augmented.h
include/linux/rcupdate.h
include/linux/rmap.h
include/linux/ti_wilink_st.h
include/net/bluetooth/hci.h
include/net/bluetooth/hci_core.h
include/net/bluetooth/mgmt.h
include/net/cfg802154.h
include/net/dsa.h
include/net/gro_cells.h
include/net/ip.h
include/net/mac802154.h
include/net/ncsi.h [new file with mode: 0644]
include/net/netfilter/nf_conntrack.h
include/net/nfc/digital.h
include/net/nfc/llc.h
include/net/nl802154.h
include/net/sctp/structs.h
include/net/sctp/ulpevent.h
include/net/sock.h
include/net/switchdev.h
include/trace/events/devlink.h [new file with mode: 0644]
include/trace/events/napi.h
include/uapi/linux/Kbuild
include/uapi/linux/bpf.h
include/uapi/linux/elf-em.h
include/uapi/linux/if_bridge.h
include/uapi/linux/if_ether.h
include/uapi/linux/if_link.h
include/uapi/linux/input-event-codes.h
include/uapi/linux/sctp.h
init/Kconfig
kernel/bpf/arraymap.c
kernel/bpf/core.c
kernel/bpf/inode.c
kernel/bpf/syscall.c
kernel/bpf/verifier.c
kernel/cpu.c
kernel/events/core.c
kernel/events/internal.h
kernel/gcov/gcc_4_7.c
kernel/sched/core.c
kernel/sched/fair.c
kernel/sched/loadavg.c
kernel/sched/sched.h
kernel/time/posix-cpu-timers.c
kernel/trace/bpf_trace.c
kernel/workqueue.c
lib/rbtree.c
mm/compaction.c
mm/huge_memory.c
mm/hugetlb.c
mm/kasan/quarantine.c
mm/memcontrol.c
mm/memory.c
mm/page_alloc.c
mm/rmap.c
mm/shmem.c
mm/slab_common.c
mm/workingset.c
net/6lowpan/ndisc.c
net/8021q/vlan_dev.c
net/8021q/vlan_netlink.c
net/Kconfig
net/Makefile
net/batman-adv/bridge_loop_avoidance.c
net/batman-adv/distributed-arp-table.c
net/batman-adv/originator.c
net/batman-adv/routing.c
net/batman-adv/send.c
net/batman-adv/types.h
net/bluetooth/af_bluetooth.c
net/bluetooth/hci_conn.c
net/bluetooth/hci_core.c
net/bluetooth/hci_debugfs.c
net/bluetooth/hci_event.c
net/bluetooth/hci_sock.c
net/bluetooth/hci_sysfs.c
net/bluetooth/l2cap_core.c
net/bluetooth/l2cap_sock.c
net/bluetooth/mgmt.c
net/bluetooth/smp.c
net/bridge/br_device.c
net/bridge/br_forward.c
net/bridge/br_input.c
net/bridge/br_multicast.c
net/bridge/br_private.h
net/bridge/netfilter/nft_reject_bridge.c
net/core/dev.c
net/core/devlink.c
net/core/drop_monitor.c
net/core/filter.c
net/core/netpoll.c
net/core/rtnetlink.c
net/core/sock.c
net/dccp/ipv4.c
net/dccp/ipv6.c
net/dsa/dsa.c
net/dsa/dsa2.c
net/dsa/slave.c
net/ieee802154/6lowpan/core.c
net/ieee802154/6lowpan/rx.c
net/ieee802154/core.c
net/ieee802154/core.h
net/ieee802154/nl802154.c
net/ipv4/af_inet.c
net/ipv4/devinet.c
net/ipv4/fib_semantics.c
net/ipv4/ip_forward.c
net/ipv4/ip_output.c
net/ipv4/ip_tunnel_core.c
net/ipv4/ipip.c
net/ipv4/ipmr.c
net/ipv4/tcp_input.c
net/ipv4/tcp_timer.c
net/ipv4/tunnel4.c
net/ipv4/udp.c
net/ipv6/addrconf.c
net/ipv6/ip6mr.c
net/ipv6/sit.c
net/ipv6/udp.c
net/mpls/af_mpls.c
net/ncsi/Kconfig [new file with mode: 0644]
net/ncsi/Makefile [new file with mode: 0644]
net/ncsi/internal.h [new file with mode: 0644]
net/ncsi/ncsi-aen.c [new file with mode: 0644]
net/ncsi/ncsi-cmd.c [new file with mode: 0644]
net/ncsi/ncsi-manage.c [new file with mode: 0644]
net/ncsi/ncsi-pkt.h [new file with mode: 0644]
net/ncsi/ncsi-rsp.c [new file with mode: 0644]
net/netfilter/ipvs/ip_vs_sync.c
net/netfilter/nf_conntrack_core.c
net/netfilter/nf_tables_api.c
net/netfilter/nft_ct.c
net/netfilter/nft_meta.c
net/nfc/digital_core.c
net/nfc/digital_dep.c
net/nfc/digital_technology.c
net/nfc/hci/llc.c
net/nfc/llcp_commands.c
net/nfc/llcp_core.c
net/packet/af_packet.c
net/rds/bind.c
net/rds/connection.c
net/rds/message.c
net/rds/rds.h
net/rds/recv.c
net/rds/send.c
net/rds/tcp.c
net/rds/tcp.h
net/rds/tcp_connect.c
net/rds/tcp_listen.c
net/rds/tcp_send.c
net/rds/threads.c
net/rose/rose_in.c
net/rxrpc/Makefile
net/rxrpc/af_rxrpc.c
net/rxrpc/ar-internal.h
net/rxrpc/call_accept.c
net/rxrpc/call_event.c
net/rxrpc/call_object.c
net/rxrpc/conn_client.c
net/rxrpc/conn_event.c
net/rxrpc/conn_object.c
net/rxrpc/conn_service.c [new file with mode: 0644]
net/rxrpc/input.c
net/rxrpc/insecure.c
net/rxrpc/local_object.c
net/rxrpc/peer_object.c
net/rxrpc/proc.c
net/rxrpc/rxkad.c
net/rxrpc/utils.c
net/sched/sch_hfsc.c
net/sched/sch_htb.c
net/sctp/associola.c
net/sctp/chunk.c
net/sctp/endpointola.c
net/sctp/input.c
net/sctp/inqueue.c
net/sctp/ipv6.c
net/sctp/offload.c
net/sctp/output.c
net/sctp/outqueue.c
net/sctp/protocol.c
net/sctp/sm_make_chunk.c
net/sctp/sm_statefuns.c
net/sctp/socket.c
net/sctp/ulpevent.c
net/switchdev/switchdev.c
net/tipc/bearer.c
net/tipc/bearer.h
net/tipc/link.c
net/tipc/node.c
net/wireless/nl80211.c
net/wireless/util.c
samples/bpf/Makefile
samples/bpf/bpf_load.c
samples/bpf/xdp1_kern.c [new file with mode: 0644]
samples/bpf/xdp1_user.c [new file with mode: 0644]
samples/bpf/xdp2_kern.c [new file with mode: 0644]
samples/pktgen/parameters.sh
samples/pktgen/pktgen.conf-1-1-flows [deleted file]
samples/pktgen/pktgen.conf-1-1-rdos [deleted file]
samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
samples/pktgen/pktgen_bench_xmit_mode_queue_xmit.sh
samples/pktgen/pktgen_sample01_simple.sh
samples/pktgen/pktgen_sample02_multiqueue.sh
samples/pktgen/pktgen_sample03_burst_single_flow.sh
samples/pktgen/pktgen_sample04_many_flows.sh [new file with mode: 0755]
samples/pktgen/pktgen_sample05_flow_per_thread.sh [new file with mode: 0755]
scripts/gdb/linux/.gitignore
scripts/gdb/linux/Makefile
scripts/gdb/linux/constants.py.in
scripts/gdb/linux/radixtree.py [deleted file]
scripts/gdb/linux/symbols.py
scripts/gdb/vmlinux-gdb.py
security/apparmor/lsm.c
sound/core/control.c
sound/core/pcm.c
sound/pci/hda/hda_intel.c
sound/pci/hda/patch_realtek.c
sound/usb/card.c
tools/hv/bondvf.sh [new file with mode: 0755]
tools/objtool/builtin-check.c
tools/perf/scripts/python/netdev-times.py
tools/testing/radix-tree/tag_check.c
tools/vm/slabinfo.c

index ca752db..d6c6e41 100644 (file)
@@ -20,6 +20,15 @@ Required properties:
       "brcm,bcm53018-srab"
       "brcm,bcm53019-srab" and the mandatory "brcm,bcm5301x-srab" string
 
+  For the BCM585xx/586xx/88312 SoCs with an integrated switch, must be one of:
+      "brcm,bcm58522-srab"
+      "brcm,bcm58523-srab"
+      "brcm,bcm58525-srab"
+      "brcm,bcm58622-srab"
+      "brcm,bcm58623-srab"
+      "brcm,bcm58625-srab"
+      "brcm,bcm88312-srab" and the mandatory "brcm,nsp-srab" string
+
   For the BCM63xx/33xx SoCs with an integrated switch, must be one of:
       "brcm,bcm3384-switch"
       "brcm,bcm6328-switch"
diff --git a/Documentation/devicetree/bindings/net/hisilicon-femac-mdio.txt b/Documentation/devicetree/bindings/net/hisilicon-femac-mdio.txt
new file mode 100644 (file)
index 0000000..23a39a3
--- /dev/null
@@ -0,0 +1,22 @@
+Hisilicon Fast Ethernet MDIO Controller interface
+
+Required properties:
+- compatible: should be "hisilicon,hisi-femac-mdio".
+- reg: address and length of the register set for the device.
+- clocks: A phandle to the reference clock for this device.
+
+- PHY subnode: inherits from phy binding [1]
+[1] Documentation/devicetree/bindings/net/phy.txt
+
+Example:
+mdio: mdio@10091100 {
+       compatible = "hisilicon,hisi-femac-mdio";
+       reg = <0x10091100 0x10>;
+       clocks = <&crg HI3516CV300_MDIO_CLK>;
+       #address-cells = <1>;
+       #size-cells = <0>;
+
+       phy0: phy@1 {
+               reg = <1>;
+       };
+};
diff --git a/Documentation/devicetree/bindings/net/hisilicon-femac.txt b/Documentation/devicetree/bindings/net/hisilicon-femac.txt
new file mode 100644 (file)
index 0000000..d11af5e
--- /dev/null
@@ -0,0 +1,39 @@
+Hisilicon Fast Ethernet MAC controller
+
+Required properties:
+- compatible: should contain one of the following version strings:
+       * "hisilicon,hisi-femac-v1"
+       * "hisilicon,hisi-femac-v2"
+       and the soc string "hisilicon,hi3516cv300-femac".
+- reg: specifies base physical address(es) and size of the device registers.
+  The first region is the MAC core register base and size.
+  The second region is the global MAC control register.
+- interrupts: should contain the MAC interrupt.
+- clocks: A phandle to the MAC main clock.
+- resets: should contain the phandle to the MAC reset signal (required) and
+       the PHY reset signal (optional).
+- reset-names: should contain the reset signal name "mac" (required)
+       and "phy" (optional).
+- mac-address: see ethernet.txt [1].
+- phy-mode: see ethernet.txt [1].
+- phy-handle: see ethernet.txt [1].
+- hisilicon,phy-reset-delays-us: triplet of delays if the PHY reset signal is given.
+       The 1st cell is the reset pre-delay in microseconds.
+       The 2nd cell is the reset pulse width in microseconds.
+       The 3rd cell is the reset post-delay in microseconds.
+
+[1] Documentation/devicetree/bindings/net/ethernet.txt
+
+Example:
+       hisi_femac: ethernet@10090000 {
+               compatible = "hisilicon,hi3516cv300-femac","hisilicon,hisi-femac-v2";
+               reg = <0x10090000 0x1000>,<0x10091300 0x200>;
+               interrupts = <12>;
+               clocks = <&crg HI3518EV200_ETH_CLK>;
+               resets = <&crg 0xec 0>,<&crg 0xec 3>;
+               reset-names = "mac","phy";
+               mac-address = [00 00 00 00 00 00];
+               phy-mode = "mii";
+               phy-handle = <&phy0>;
+               hisilicon,phy-reset-delays-us = <10000 20000 20000>;
+       };
index 87496a8..8d157f0 100644 (file)
@@ -35,3 +35,13 @@ Optional properties:
        supported clocks:
        - KSZ8021, KSZ8031, KSZ8081, KSZ8091: "rmii-ref": The RMII reference
          input clock. Used to determine the XI input clock.
+
+ - micrel,fiber-mode: If present, the PHY is configured to operate in fiber mode.
+
+       Some PHYs, such as the KSZ8041FTL variant, support fiber mode, enabled
+       by the FXEN boot strapping pin. It can't be determined from the PHY
+       registers whether the PHY is in fiber mode, so this boolean device tree
+       property can be used to describe it.
+
+       In fiber mode, auto-negotiation is disabled and the PHY can only work in
+       100base-fx (full and half duplex) modes.
index 72d82d6..2e68a3c 100644 (file)
@@ -17,9 +17,26 @@ Required properties:
 Optional properties:
altr,emac-splitter: Should be the phandle to the emac splitter soft IP node if
               the DWMAC controller is connected to the emac splitter.
+phy-mode: The PHY mode the ethernet controller operates in.
+altr,sgmii-to-sgmii-converter: phandle to the TSE SGMII converter
+
+This device node has an additional phandle dependency, the SGMII converter:
+
+Required properties:
+ - compatible  : Should be "altr,gmii-to-sgmii-2.0"
+ - reg-names   : Should be "eth_tse_control_port"
 
 Example:
 
+gmii_to_sgmii_converter: phy@0x100000240 {
+       compatible = "altr,gmii-to-sgmii-2.0";
+       reg = <0x00000001 0x00000240 0x00000008>,
+               <0x00000001 0x00000200 0x00000040>;
+       reg-names = "eth_tse_control_port";
+       clocks = <&sgmii_1_clk_0 &emac1 1 &sgmii_clk_125 &sgmii_clk_125>;
+       clock-names = "tse_pcs_ref_clk_clock_connection", "tse_rx_cdr_refclk";
+};
+
 gmac0: ethernet@ff700000 {
        compatible = "altr,socfpga-stmmac", "snps,dwmac-3.70a", "snps,dwmac";
        altr,sysmgr-syscon = <&sysmgr 0x60 0>;
@@ -30,4 +47,6 @@ gmac0: ethernet@ff700000 {
        mac-address = [00 00 00 00 00 00];/* Filled in by U-Boot */
        clocks = <&emac_0_clk>;
        clock-names = "stmmaceth";
+       phy-mode = "sgmii";
+       altr,gmii-to-sgmii-converter = <&gmii_to_sgmii_converter>;
 };
index 4ab7d43..7050ce8 100644 (file)
@@ -139,27 +139,6 @@ Examples of using the Linux-provided gdb helpers
       start_comm = "swapper/2\000\000\000\000\000\000"
     }
 
- o Dig into a radix tree data structure, such as the IRQ descriptors:
-    (gdb) print (struct irq_desc)$lx_radix_tree_lookup(irq_desc_tree, 18)
-    $6 = {
-      irq_common_data = {
-        state_use_accessors = 67584,
-        handler_data = 0x0 <__vectors_start>,
-        msi_desc = 0x0 <__vectors_start>,
-        affinity = {{
-            bits = {65535}
-          }}
-      },
-      irq_data = {
-        mask = 0,
-        irq = 18,
-        hwirq = 27,
-        common = 0xee803d80,
-        chip = 0xc0eb0854 <gic_data>,
-        domain = 0xee808000,
-        parent_data = 0x0 <__vectors_start>,
-        chip_data = 0xc0eb0854 <gic_data>
-      } <... trimmed ...>
 
 List of commands and functions
 ------------------------------
index 9d219d8..0235ae6 100644 (file)
@@ -85,7 +85,8 @@ Socket Interface
 
   bind(fd, &sockaddr_in, ...)
         This binds the socket to a local IP address and port, and a
-        transport.
+        transport, if one has not already been selected via the
+        SO_RDS_TRANSPORT socket option.
 
   sendmsg(fd, ...)
         Sends a message to the indicated recipient. The kernel will
@@ -146,6 +147,20 @@ Socket Interface
         operation. In this case, it would use RDS_CANCEL_SENT_TO to
         nuke any pending messages.
 
+  setsockopt(fd, SOL_RDS, SO_RDS_TRANSPORT, (int *)&transport ..)
+  getsockopt(fd, SOL_RDS, SO_RDS_TRANSPORT, (int *)&transport ..)
+       Set or read an integer defining the underlying
+       encapsulating transport to be used for RDS packets on the
+       socket. When setting the option, the integer argument may be
+       one of RDS_TRANS_TCP or RDS_TRANS_IB. When retrieving the
+       value, RDS_TRANS_NONE will be returned on an unbound socket.
+       This socket option may only be set exactly once on the socket,
+       prior to binding it via the bind(2) system call. Attempts to
+       set SO_RDS_TRANSPORT on a socket for which the transport has
+       been previously attached explicitly (by SO_RDS_TRANSPORT) or
+       implicitly (via bind(2)) will return an error of EOPNOTSUPP.
+       An attempt to set SO_RDS_TRANSPORT to RDS_TRANS_NONE will
+       always return EINVAL.
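The set-once semantics above (settable only before bind(2), never to RDS_TRANS_NONE) can be captured in a short sketch. This is an illustrative model of the rules, not the kernel implementation; the class name and the placeholder constant values are ours — the real constants live in <linux/rds.h>.

```python
RDS_TRANS_NONE = None                      # placeholder; real values in <linux/rds.h>
RDS_TRANS_IB, RDS_TRANS_TCP = "ib", "tcp"  # placeholders as well


class RdsSocketModel:
    """Toy model of the setsockopt(SO_RDS_TRANSPORT) rules, not kernel code."""

    def __init__(self):
        self.transport = RDS_TRANS_NONE
        self.bound = False

    def set_transport(self, t):
        if t is RDS_TRANS_NONE:
            return "EINVAL"        # RDS_TRANS_NONE may never be set explicitly
        if self.transport is not RDS_TRANS_NONE:
            return "EOPNOTSUPP"    # already attached, explicitly or via bind(2)
        self.transport = t
        return 0

    def bind(self, transport_for_addr=RDS_TRANS_TCP):
        # bind(2) attaches a transport implicitly if none was chosen yet
        if self.transport is RDS_TRANS_NONE:
            self.transport = transport_for_addr
        self.bound = True


s = RdsSocketModel()
assert s.set_transport(RDS_TRANS_NONE) == "EINVAL"
assert s.set_transport(RDS_TRANS_TCP) == 0             # first explicit set succeeds
assert s.set_transport(RDS_TRANS_IB) == "EOPNOTSUPP"   # may only be set once

t = RdsSocketModel()
t.bind()                                               # implicit attach via bind(2)
assert t.set_transport(RDS_TRANS_TCP) == "EOPNOTSUPP"
```

An unbound socket with no explicit transport would report RDS_TRANS_NONE from getsockopt(), which the model represents with the placeholder None.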
 
 RDMA for RDS
 ============
@@ -350,4 +365,59 @@ The recv path
     handle CMSGs
     return to application
 
+Multipath RDS (mprds)
+=====================
+  Mprds is multipathed-RDS, primarily intended for RDS-over-TCP
+  (though the concept can be extended to other transports). The classical
+  implementation of RDS-over-TCP multiplexes multiple PF_RDS sockets
+  between any 2 endpoints (where endpoint == [IP address, port]) over a
+  single TCP socket between the 2 IP addresses involved. This has the
+  limitation that it ends up funneling multiple RDS flows over a single
+  TCP flow, thus it
+  (a) is upper-bounded by the single-flow bandwidth, and
+  (b) suffers from head-of-line blocking for all the RDS sockets.
+
+  Better throughput (for a fixed small packet size, MTU) can be achieved
+  by having multiple TCP/IP flows per rds/tcp connection, i.e., multipathed
+  RDS (mprds).  Each such TCP/IP flow constitutes a path for the rds/tcp
+  connection. RDS sockets will be attached to a path based on some hash
+  (e.g., of local address and RDS port number) and packets for that RDS
+  socket will be sent over the attached path using TCP to segment/reassemble
+  RDS datagrams on that path.
+
+  Multipathed RDS is implemented by splitting the struct rds_connection into
+  a common (to all paths) part, and a per-path struct rds_conn_path. All
+  I/O workqs and reconnect threads are driven from the rds_conn_path.
+  Transports such as TCP that are multipath capable may then set up a
+  TCP socket per rds_conn_path, and this is managed by the transport via
+  the transport-private cp_transport_data pointer.
+
+  Transports announce themselves as multipath capable by setting the
+  t_mp_capable bit during registration with the rds core module. When the
+  transport is multipath-capable, rds_sendmsg() hashes outgoing traffic
+  across multiple paths. The outgoing hash is computed based on the
+  local address and port that the PF_RDS socket is bound to.
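The pinning of a socket to one path can be illustrated with a hedged sketch: the kernel's actual hash differs, and the function name here is ours, but the key property is that a given bound local address and port always map to the same path, preserving per-socket ordering.

```python
# Illustrative sketch only: the real kernel hash differs. The point is
# that one bound PF_RDS socket deterministically maps to one path.
def select_path(local_addr, local_port, npaths):
    return hash((local_addr, local_port)) % npaths


# Every send from the same bound socket uses the same path:
p1 = select_path("192.0.2.1", 4000, 8)
p2 = select_path("192.0.2.1", 4000, 8)
assert p1 == p2 and 0 <= p1 < 8
```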
+
+  Additionally, even if the transport is MP capable, we may be
+  peering with some node that does not support mprds, or supports
+  a different number of paths. As a result, the peering nodes need
+  to agree on the number of paths to be used for the connection.
+  This is done by sending out a control packet exchange before the
+  first data packet. The control packet exchange must have completed
+  prior to the outgoing hash computation in rds_sendmsg() when the
+  transport is multipath capable.
+
+  The control packet is an RDS ping packet (i.e., a packet to RDS dest
+  port 0) carrying an RDS extension header option of type
+  RDS_EXTHDR_NPATHS, length 2 bytes, whose value is the number of paths
+  supported by the sender. The "probe" ping packet will be sent from a
+  reserved port, RDS_FLAG_PROBE_PORT (in <linux/rds.h>).
+  The receiver of a ping from RDS_FLAG_PROBE_PORT will thus immediately
+  be able to compute the min(sender_paths, rcvr_paths). The pong
+  sent in response to a probe-ping should contain the rcvr's npaths
+  when the rcvr is mprds-capable.
+
+  If the rcvr is not mprds-capable, the exthdr in the ping will be
+  ignored.  In this case the pong will not have any exthdrs, so the sender
+  of the probe-ping can fall back to single-path RDS.
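The negotiation above reduces to each side taking the minimum of its own path count and the peer's advertised count, with a missing extension header meaning a legacy, single-path peer. A minimal sketch of that rule (an illustrative model, not kernel code; the function name is ours):

```python
def negotiated_paths(sender_paths, pong_npaths_exthdr):
    """pong_npaths_exthdr is None when the pong carried no
    RDS_EXTHDR_NPATHS extension header, i.e. the peer is not
    mprds-capable."""
    if pong_npaths_exthdr is None:
        return 1                  # legacy peer: fall back to a single path
    return min(sender_paths, pong_npaths_exthdr)


assert negotiated_paths(8, 4) == 4     # both mprds-capable: use the minimum
assert negotiated_paths(8, None) == 1  # peer ignored the exthdr: single path
```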
 
index 5da679c..755dab8 100644 (file)
@@ -15,9 +15,9 @@ the use of higher priority ip rules (Policy Based Routing, PBR) to take
 precedence over the VRF device rules directing specific traffic as desired.
 
 In addition, VRF devices allow VRFs to be nested within namespaces. For
-example network namespaces provide separation of network interfaces at L1
-(Layer 1 separation), VLANs on the interfaces within a namespace provide
-L2 separation and then VRF devices provide L3 separation.
+example network namespaces provide separation of network interfaces at the
+device layer, VLANs on the interfaces within a namespace provide L2 separation
+and then VRF devices provide L3 separation.
 
 Design
 ------
@@ -37,21 +37,22 @@ are then enslaved to a VRF device:
                               +------+ +------+
 
Packets received on an enslaved device are switched to the VRF device
-using an rx_handler which gives the impression that packets flow through
-the VRF device. Similarly on egress routing rules are used to send packets
-to the VRF device driver before getting sent out the actual interface. This
-allows tcpdump on a VRF device to capture all packets into and out of the
-VRF as a whole.[1] Similarly, netfilter [2] and tc rules can be applied
-using the VRF device to specify rules that apply to the VRF domain as a whole.
+in the IPv4 and IPv6 processing stacks, giving the impression that packets
+flow through the VRF device. Similarly, on egress, routing rules are used to
+send packets to the VRF device driver before getting sent out the actual
+interface. This allows tcpdump on a VRF device to capture all packets into
+and out of the VRF as a whole.[1] Similarly, netfilter[2] and tc rules can be
+applied using the VRF device to specify rules that apply to the VRF domain
+as a whole.
 
 [1] Packets in the forwarded state do not flow through the device, so those
     packets are not seen by tcpdump. Will revisit this limitation in a
     future release.
 
-[2] Iptables on ingress is limited to NF_INET_PRE_ROUTING only with skb->dev
-    set to real ingress device and egress is limited to NF_INET_POST_ROUTING.
-    Will revisit this limitation in a future release.
-
+[2] Iptables on ingress supports PREROUTING with skb->dev set to the real
+    ingress device and both INPUT and PREROUTING rules with skb->dev set to
+    the VRF device. For egress, POSTROUTING and OUTPUT rules can be written
+    using either the VRF device or real egress device.
 
 Setup
 -----
@@ -59,23 +60,33 @@ Setup
    e.g, ip link add vrf-blue type vrf table 10
         ip link set dev vrf-blue up
 
-2. Rules are added that send lookups to the associated FIB table when the
-   iif or oif is the VRF device. e.g.,
+2. An l3mdev FIB rule directs lookups to the table associated with the device.
+   A single l3mdev rule is sufficient for all VRFs. The VRF device adds the
+   l3mdev rule for IPv4 and IPv6 when the first device is created with a
+   default preference of 1000. Users may delete the rule if desired and add
+   it back with a different priority, or install per-VRF rules.
+
+   Prior to the v4.8 kernel iif and oif rules are needed for each VRF device:
        ip ru add oif vrf-blue table 10
        ip ru add iif vrf-blue table 10
 
-   Set the default route for the table (and hence default route for the VRF).
-   e.g, ip route add table 10 prohibit default
+3. Set the default route for the table (and hence default route for the VRF).
+       ip route add table 10 unreachable default
 
-3. Enslave L3 interfaces to a VRF device.
-   e.g,  ip link set dev eth1 master vrf-blue
+4. Enslave L3 interfaces to a VRF device.
+       ip link set dev eth1 master vrf-blue
 
    Local and connected routes for enslaved devices are automatically moved to
   the table associated with the VRF device. Any additional routes depending on
-   the enslaved device will need to be reinserted following the enslavement.
+   the enslaved device are dropped and will need to be reinserted to the VRF
+   FIB table following the enslavement.
+
+   The IPv6 sysctl option keep_addr_on_down can be enabled to keep IPv6 global
+   addresses as VRF enslavement changes.
+       sysctl -w net.ipv6.conf.all.keep_addr_on_down=1
 
-4. Additional VRF routes are added to associated table.
-   e.g., ip route add table 10 ...
+5. Additional VRF routes are added to associated table.
+       ip route add table 10 ...
 
 
 Applications
@@ -87,39 +98,34 @@ VRF device:
 
 or to specify the output device using cmsg and IP_PKTINFO.
 
+TCP services running in the default VRF context (i.e., not bound to any VRF
+device) can work across all VRF domains by enabling the tcp_l3mdev_accept
+sysctl option:
+    sysctl -w net.ipv4.tcp_l3mdev_accept=1
 
-Limitations
------------
-Index of original ingress interface is not available via cmsg. Will address
-soon.
+netfilter rules on the VRF device can be used to limit access to services
+running in the default VRF context as well.
+
+The default VRF does not have limited scope with respect to port bindings.
+That is, if a process does a wildcard bind to a port in the default VRF it
+owns the port across all VRF domains within the network namespace.
 
 ################################################################################
 
 Using iproute2 for VRFs
 =======================
-VRF devices do *not* have to start with 'vrf-'. That is a convention used here
-for emphasis of the device type, similar to use of 'br' in bridge names.
+iproute2 supports the vrf keyword as of v4.7. For backwards compatibility this
+section lists both commands where appropriate -- with the vrf keyword and the
+older form without it.
 
 1. Create a VRF
 
    To instantiate a VRF device and associate it with a table:
        $ ip link add dev NAME type vrf table ID
 
-   Remember to add the ip rules as well:
-       $ ip ru add oif NAME table 10
-       $ ip ru add iif NAME table 10
-       $ ip -6 ru add oif NAME table 10
-       $ ip -6 ru add iif NAME table 10
-
-   Without the rules route lookups are not directed to the table.
-
-   For example:
-   $ ip link add dev vrf-blue type vrf table 10
-   $ ip ru add pref 200 oif vrf-blue table 10
-   $ ip ru add pref 200 iif vrf-blue table 10
-   $ ip -6 ru add pref 200 oif vrf-blue table 10
-   $ ip -6 ru add pref 200 iif vrf-blue table 10
-
+   As of v4.8 the kernel supports the l3mdev FIB rule where a single rule
+   covers all VRFs. The l3mdev rule is created for IPv4 and IPv6 when the
+   first VRF device is created.
 
 2. List VRFs
 
@@ -129,16 +135,16 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
 
    For example:
    $ ip -d link show type vrf
-   11: vrf-mgmt: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+   11: mgmt: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether 72:b3:ba:91:e2:24 brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 1 addrgenmode eui64
-   12: vrf-red: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+   12: red: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether b6:6f:6e:f6:da:73 brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 10 addrgenmode eui64
-   13: vrf-blue: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+   13: blue: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether 36:62:e8:7d:bb:8c brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 66 addrgenmode eui64
-   14: vrf-green: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
+   14: green: <NOARP,MASTER,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
        link/ether e6:28:b8:63:70:bb brd ff:ff:ff:ff:ff:ff promiscuity 0
        vrf table 81 addrgenmode eui64
 
@@ -146,43 +152,44 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
    Or in brief output:
 
    $ ip -br link show type vrf
-   vrf-mgmt         UP             72:b3:ba:91:e2:24 <NOARP,MASTER,UP,LOWER_UP>
-   vrf-red          UP             b6:6f:6e:f6:da:73 <NOARP,MASTER,UP,LOWER_UP>
-   vrf-blue         UP             36:62:e8:7d:bb:8c <NOARP,MASTER,UP,LOWER_UP>
-   vrf-green        UP             e6:28:b8:63:70:bb <NOARP,MASTER,UP,LOWER_UP>
+   mgmt         UP             72:b3:ba:91:e2:24 <NOARP,MASTER,UP,LOWER_UP>
+   red          UP             b6:6f:6e:f6:da:73 <NOARP,MASTER,UP,LOWER_UP>
+   blue         UP             36:62:e8:7d:bb:8c <NOARP,MASTER,UP,LOWER_UP>
+   green        UP             e6:28:b8:63:70:bb <NOARP,MASTER,UP,LOWER_UP>
 
 
 3. Assign a Network Interface to a VRF
 
    Network interfaces are assigned to a VRF by enslaving the netdevice to a
    VRF device:
-       $ ip link set dev NAME master VRF-NAME
+       $ ip link set dev NAME master NAME
 
   On enslavement, connected and local routes are automatically moved to the
    table associated with the VRF device.
 
    For example:
-   $ ip link set dev eth0 master vrf-mgmt
+   $ ip link set dev eth0 master mgmt
 
 
 4. Show Devices Assigned to a VRF
 
    To show devices that have been assigned to a specific VRF add the master
    option to the ip command:
-       $ ip link show master VRF-NAME
+       $ ip link show vrf NAME
+       $ ip link show master NAME
 
    For example:
-   $ ip link show master vrf-red
-   3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vrf-red state UP mode DEFAULT group default qlen 1000
+   $ ip link show vrf red
+   3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:02 brd ff:ff:ff:ff:ff:ff
-   4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vrf-red state UP mode DEFAULT group default qlen 1000
+   4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:03 brd ff:ff:ff:ff:ff:ff
-   7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master vrf-red state DOWN mode DEFAULT group default qlen 1000
+   7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master red state DOWN mode DEFAULT group default qlen 1000
        link/ether 02:00:00:00:02:06 brd ff:ff:ff:ff:ff:ff
 
 
    Or using the brief output:
-   $ ip -br link show master vrf-red
+   $ ip -br link show vrf red
    eth1             UP             02:00:00:00:02:02 <BROADCAST,MULTICAST,UP,LOWER_UP>
    eth2             UP             02:00:00:00:02:03 <BROADCAST,MULTICAST,UP,LOWER_UP>
    eth5             DOWN           02:00:00:00:02:06 <BROADCAST,MULTICAST>
@@ -192,26 +199,28 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
 
    To list neighbor entries associated with devices enslaved to a VRF device
    add the master option to the ip command:
-       $ ip [-6] neigh show master VRF-NAME
+       $ ip [-6] neigh show vrf NAME
+       $ ip [-6] neigh show master NAME
 
    For example:
-   $  ip neigh show master vrf-red
+   $  ip neigh show vrf red
    10.2.1.254 dev eth1 lladdr a6:d9:c7:4f:06:23 REACHABLE
    10.2.2.254 dev eth2 lladdr 5e:54:01:6a:ee:80 REACHABLE
 
-    $ ip -6 neigh show master vrf-red
-    2002:1::64 dev eth1 lladdr a6:d9:c7:4f:06:23 REACHABLE
+   $ ip -6 neigh show vrf red
+   2002:1::64 dev eth1 lladdr a6:d9:c7:4f:06:23 REACHABLE
 
 
 6. Show Addresses for a VRF
 
    To show addresses for interfaces associated with a VRF add the master
    option to the ip command:
-       $ ip addr show master VRF-NAME
+       $ ip addr show vrf NAME
+       $ ip addr show master NAME
 
    For example:
-   $ ip addr show master vrf-red
-   3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vrf-red state UP group default qlen 1000
+   $ ip addr show vrf red
+   3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP group default qlen 1000
        link/ether 02:00:00:00:02:02 brd ff:ff:ff:ff:ff:ff
        inet 10.2.1.2/24 brd 10.2.1.255 scope global eth1
           valid_lft forever preferred_lft forever
@@ -219,7 +228,7 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
           valid_lft forever preferred_lft forever
        inet6 fe80::ff:fe00:202/64 scope link
           valid_lft forever preferred_lft forever
-   4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vrf-red state UP group default qlen 1000
+   4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master red state UP group default qlen 1000
        link/ether 02:00:00:00:02:03 brd ff:ff:ff:ff:ff:ff
        inet 10.2.2.2/24 brd 10.2.2.255 scope global eth2
           valid_lft forever preferred_lft forever
@@ -227,11 +236,11 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
           valid_lft forever preferred_lft forever
        inet6 fe80::ff:fe00:203/64 scope link
           valid_lft forever preferred_lft forever
-   7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master vrf-red state DOWN group default qlen 1000
+   7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master red state DOWN group default qlen 1000
        link/ether 02:00:00:00:02:06 brd ff:ff:ff:ff:ff:ff
 
    Or in brief format:
-   $ ip -br addr show master vrf-red
+   $ ip -br addr show vrf red
    eth1             UP             10.2.1.2/24 2002:1::2/120 fe80::ff:fe00:202/64
    eth2             UP             10.2.2.2/24 2002:2::2/120 fe80::ff:fe00:203/64
    eth5             DOWN
@@ -241,10 +250,11 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
 
    To show routes for a VRF use the ip command to display the table associated
    with the VRF device:
+       $ ip [-6] route show vrf NAME
        $ ip [-6] route show table ID
 
    For example:
-   $ ip route show table vrf-red
+   $ ip route show vrf red
    prohibit default
    broadcast 10.2.1.0 dev eth1  proto kernel  scope link  src 10.2.1.2
    10.2.1.0/24 dev eth1  proto kernel  scope link  src 10.2.1.2
@@ -255,7 +265,7 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
    local 10.2.2.2 dev eth2  proto kernel  scope host  src 10.2.2.2
    broadcast 10.2.2.255 dev eth2  proto kernel  scope link  src 10.2.2.2
 
-   $ ip -6 route show table vrf-red
+   $ ip -6 route show vrf red
    local 2002:1:: dev lo  proto none  metric 0  pref medium
    local 2002:1::2 dev lo  proto none  metric 0  pref medium
    2002:1::/120 dev eth1  proto kernel  metric 256  pref medium
@@ -268,23 +278,24 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
    local fe80::ff:fe00:203 dev lo  proto none  metric 0  pref medium
    fe80::/64 dev eth1  proto kernel  metric 256  pref medium
    fe80::/64 dev eth2  proto kernel  metric 256  pref medium
-   ff00::/8 dev vrf-red  metric 256  pref medium
+   ff00::/8 dev red  metric 256  pref medium
    ff00::/8 dev eth1  metric 256  pref medium
    ff00::/8 dev eth2  metric 256  pref medium
 
 
 8. Route Lookup for a VRF
 
-   A test route lookup can be done for a VRF by adding the oif option to ip:
-       $ ip [-6] route get oif VRF-NAME ADDRESS
+   A test route lookup can be done for a VRF:
+       $ ip [-6] route get vrf NAME ADDRESS
+       $ ip [-6] route get oif NAME ADDRESS
 
    For example:
-   $ ip route get 10.2.1.40 oif vrf-red
-   10.2.1.40 dev eth1  table vrf-red  src 10.2.1.2
+   $ ip route get 10.2.1.40 vrf red
+   10.2.1.40 dev eth1  table red  src 10.2.1.2
        cache
 
-   $ ip -6 route get 2002:1::32 oif vrf-red
-   2002:1::32 from :: dev eth1  table vrf-red  proto kernel  src 2002:1::2  metric 256  pref medium
+   $ ip -6 route get 2002:1::32 vrf red
+   2002:1::32 from :: dev eth1  table red  proto kernel  src 2002:1::2  metric 256  pref medium
 
 
 9. Removing Network Interface from a VRF
@@ -303,46 +314,40 @@ for emphasis of the device type, similar to use of 'br' in bridge names.
 
 Commands used in this example:
 
-cat >> /etc/iproute2/rt_tables <<EOF
-1  vrf-mgmt
-10 vrf-red
-66 vrf-blue
-81 vrf-green
+cat >> /etc/iproute2/rt_tables.d/vrf.conf <<EOF
+1  mgmt
+10 red
+66 blue
+81 green
 EOF
 
 function vrf_create
 {
     VRF=$1
     TBID=$2
-    # create VRF device
-    ip link add vrf-${VRF} type vrf table ${TBID}
 
-    # add rules that direct lookups to vrf table
-    ip ru add pref 200 oif vrf-${VRF} table ${TBID}
-    ip ru add pref 200 iif vrf-${VRF} table ${TBID}
-    ip -6 ru add pref 200 oif vrf-${VRF} table ${TBID}
-    ip -6 ru add pref 200 iif vrf-${VRF} table ${TBID}
+    # create VRF device
+    ip link add ${VRF} type vrf table ${TBID}
 
     if [ "${VRF}" != "mgmt" ]; then
-        ip route add table ${TBID} prohibit default
+        ip route add table ${TBID} unreachable default
     fi
-    ip link set dev vrf-${VRF} up
-    ip link set dev vrf-${VRF} state up
+    ip link set dev ${VRF} up
 }
 
 vrf_create mgmt 1
-ip link set dev eth0 master vrf-mgmt
+ip link set dev eth0 master mgmt
 
 vrf_create red 10
-ip link set dev eth1 master vrf-red
-ip link set dev eth2 master vrf-red
-ip link set dev eth5 master vrf-red
+ip link set dev eth1 master red
+ip link set dev eth2 master red
+ip link set dev eth5 master red
 
 vrf_create blue 66
-ip link set dev eth3 master vrf-blue
+ip link set dev eth3 master blue
 
 vrf_create green 81
-ip link set dev eth4 master vrf-green
+ip link set dev eth4 master green
 
 
 Interface addresses from /etc/network/interfaces:
index 1a5a121..85d0549 100644 (file)
@@ -45,7 +45,7 @@ is how we expect the compiler, application and kernel to work together.
    MPX-instrumented.
 3) The kernel detects that the CPU has MPX, allows the new prctl() to
    succeed, and notes the location of the bounds directory. Userspace is
-   expected to keep the bounds directory at that locationWe note it
+   expected to keep the bounds directory at that location. We note it
    instead of reading it each time because the 'xsave' operation needed
    to access the bounds directory register is an expensive operation.
 4) If the application needs to spill bounds out of the 4 registers, it
@@ -167,7 +167,7 @@ If a #BR is generated due to a bounds violation caused by MPX.
 We need to decode MPX instructions to get violation address and
 set this address into extended struct siginfo.
 
-The _sigfault feild of struct siginfo is extended as follow:
+The _sigfault field of struct siginfo is extended as follows:
 
 87             /* SIGILL, SIGFPE, SIGSEGV, SIGBUS */
 88             struct {
@@ -240,5 +240,5 @@ them at the same bounds table.
 This is allowed architecturally.  See more information "Intel(R) Architecture
 Instruction Set Extensions Programming Reference" (9.3.4).
 
-However, if users did this, the kernel might be fooled in to unmaping an
+However, if users did this, the kernel might be fooled into unmapping an
 in-use bounds table since it does not recognize sharing.
index 39d1723..6a0607b 100644 (file)
@@ -5,7 +5,7 @@ memory, it has two choices:
     from areas other than the one we are trying to flush will be
     destroyed and must be refilled later, at some cost.
  2. Use the invlpg instruction to invalidate a single page at a
-    time.  This could potentialy cost many more instructions, but
+    time.  This could potentially cost many more instructions, but
     it is a much more precise operation, causing no collateral
     damage to other TLB entries.
 
@@ -19,7 +19,7 @@ Which method to do depends on a few things:
     work.
  3. The size of the TLB.  The larger the TLB, the more collateral
     damage we do with a full flush.  So, the larger the TLB, the
-    more attrative an individual flush looks.  Data and
+    more attractive an individual flush looks.  Data and
     instructions have separate TLBs, as do different page sizes.
  4. The microarchitecture.  The TLB has become a multi-level
     cache on modern CPUs, and the global flushes have become more
index b1fb302..d0648a7 100644 (file)
@@ -36,7 +36,7 @@ between all CPUs.
 
 check_interval
        How often to poll for corrected machine check errors, in seconds
-       (Note output is hexademical). Default 5 minutes.  When the poller
+       (Note output is hexadecimal). Default 5 minutes.  When the poller
        finds MCEs it triggers an exponential speedup (poll more often) on
        the polling interval.  When the poller stops finding MCEs, it
        triggers an exponential backoff (poll less often) on the polling
index 6374be2..dc3481d 100644 (file)
@@ -1694,8 +1694,6 @@ S:        Maintained
 F:     drivers/edac/altera_edac.
 
 ARM/STI ARCHITECTURE
-M:     Srinivas Kandagatla <srinivas.kandagatla@gmail.com>
-M:     Maxime Coquelin <maxime.coquelin@st.com>
 M:     Patrice Chotard <patrice.chotard@st.com>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:     kernel@stlinux.com
@@ -1728,6 +1726,7 @@ F:        drivers/ata/ahci_st.c
 
 ARM/STM32 ARCHITECTURE
 M:     Maxime Coquelin <mcoquelin.stm32@gmail.com>
+M:     Alexandre Torgue <alexandre.torgue@st.com>
 L:     linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:     Maintained
 T:     git git://git.kernel.org/pub/scm/linux/kernel/git/mcoquelin/stm32.git
@@ -4486,7 +4485,7 @@ S:        Orphan
 F:     fs/efs/
 
 EHEA (IBM pSeries eHEA 10Gb ethernet adapter) DRIVER
-M:     Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
+M:     Douglas Miller <dougmill@linux.vnet.ibm.com>
 L:     netdev@vger.kernel.org
 S:     Maintained
 F:     drivers/net/ethernet/ibm/ehea/
@@ -7502,6 +7501,7 @@ Q:        http://patchwork.ozlabs.org/project/linux-mtd/list/
 T:     git git://git.infradead.org/linux-mtd.git
 T:     git git://git.infradead.org/l2-mtd.git
 S:     Maintained
+F:     Documentation/devicetree/bindings/mtd/
 F:     drivers/mtd/
 F:     include/linux/mtd/
 F:     include/uapi/mtd/
@@ -9710,7 +9710,6 @@ F:        Documentation/ABI/*/sysfs-driver-hid-roccat*
 
 ROCKER DRIVER
 M:     Jiri Pirko <jiri@resnulli.us>
-M:     Scott Feldman <sfeldma@gmail.com>
 L:     netdev@vger.kernel.org
 S:     Supported
 F:     drivers/net/ethernet/rocker/
index 0d50489..81b2262 100644 (file)
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 VERSION = 4
 PATCHLEVEL = 7
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Psychotic Stoned Sheep
 
 # *DOCUMENTATION*
index 8450944..22f7a13 100644 (file)
@@ -58,8 +58,8 @@
        soc {
                ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000
                          MBUS_ID(0x01, 0x1d) 0 0xfff00000 0x100000
-                         MBUS_ID(0x09, 0x09) 0 0xf1100000 0x10000
-                         MBUS_ID(0x09, 0x05) 0 0xf1110000 0x10000>;
+                         MBUS_ID(0x09, 0x19) 0 0xf1100000 0x10000
+                         MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000>;
 
                internal-regs {
 
index a03e56f..ca58eb2 100644 (file)
@@ -65,8 +65,9 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0-hdmi";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 43>,
-                                <&ahb_gates 44>, <&dram_gates 26>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 43>, <&ahb_gates 44>,
+                                <&dram_gates 26>;
                        status = "disabled";
                };
 
@@ -74,8 +75,9 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_fe0-de_be0-lcd0-hdmi";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 43>,
-                                <&ahb_gates 44>, <&ahb_gates 46>,
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 43>, <&ahb_gates 44>,
+                                <&ahb_gates 46>,
                                 <&dram_gates 25>, <&dram_gates 26>;
                        status = "disabled";
                };
@@ -84,9 +86,9 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_fe0-de_be0-lcd0";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 44>,
-                                <&ahb_gates 46>, <&dram_gates 25>,
-                                <&dram_gates 26>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 44>, <&ahb_gates 46>,
+                                <&dram_gates 25>, <&dram_gates 26>;
                        status = "disabled";
                };
 
@@ -94,8 +96,9 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_fe0-de_be0-lcd0-tve0";
-                       clocks = <&pll5 1>, <&ahb_gates 34>, <&ahb_gates 36>,
-                                <&ahb_gates 44>, <&ahb_gates 46>,
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 34>,
+                                <&ahb_gates 36>, <&ahb_gates 44>,
+                                <&ahb_gates 46>,
                                 <&dram_gates 5>, <&dram_gates 25>, <&dram_gates 26>;
                        status = "disabled";
                };
index bddd0de..367f330 100644
@@ -65,8 +65,8 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0-hdmi";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 43>,
-                                <&ahb_gates 44>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 43>, <&ahb_gates 44>;
                        status = "disabled";
                };
 
@@ -74,7 +74,8 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 44>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 44>;
                        status = "disabled";
                };
 
@@ -82,8 +83,8 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0-tve0";
-                       clocks = <&pll5 1>, <&ahb_gates 34>, <&ahb_gates 36>,
-                                <&ahb_gates 44>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 34>,
+                                <&ahb_gates 36>, <&ahb_gates 44>;
                        status = "disabled";
                };
        };
index a8d8b45..f694482 100644
@@ -52,7 +52,7 @@
 
 / {
        model = "NextThing C.H.I.P.";
-       compatible = "nextthing,chip", "allwinner,sun5i-r8";
+       compatible = "nextthing,chip", "allwinner,sun5i-r8", "allwinner,sun5i-a13";
 
        aliases {
                i2c0 = &i2c0;
index febdf4c..2c34bbb 100644
@@ -67,8 +67,9 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0-hdmi";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 43>,
-                                <&ahb_gates 44>, <&dram_gates 26>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 43>, <&ahb_gates 44>,
+                                <&dram_gates 26>;
                        status = "disabled";
                };
 
@@ -76,8 +77,8 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0";
-                       clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 44>,
-                                <&dram_gates 26>;
+                       clocks = <&pll3>, <&pll5 1>, <&ahb_gates 36>,
+                                <&ahb_gates 44>, <&dram_gates 26>;
                        status = "disabled";
                };
 
@@ -85,7 +86,7 @@
                        compatible = "allwinner,simple-framebuffer",
                                     "simple-framebuffer";
                        allwinner,pipeline = "de_be0-lcd0-tve0";
-                       clocks = <&pll5 1>,
+                       clocks = <&pll3>, <&pll5 1>,
                                 <&ahb_gates 34>, <&ahb_gates 36>, <&ahb_gates 44>,
                                 <&dram_gates 5>, <&dram_gates 26>;
                        status = "disabled";
                pll3x2: pll3x2_clk {
                        #clock-cells = <0>;
                        compatible = "fixed-factor-clock";
+                       clocks = <&pll3>;
                        clock-div = <1>;
                        clock-mult = <2>;
                        clock-output-names = "pll3-2x";
                pll7x2: pll7x2_clk {
                        #clock-cells = <0>;
                        compatible = "fixed-factor-clock";
+                       clocks = <&pll7>;
                        clock-div = <1>;
                        clock-mult = <2>;
                        clock-output-names = "pll7-2x";
index 1eca3b2..b6da15d 100644
 
                                ldo5_reg: ldo5 {
                                        regulator-name = "vddio_sdmmc,avdd_vdac";
-                                       regulator-min-microvolt = <3300000>;
+                                       regulator-min-microvolt = <1800000>;
                                        regulator-max-microvolt = <3300000>;
                                        regulator-always-on;
                                };
 
        sdhci@78000000 {
                status = "okay";
+               vqmmc-supply = <&ldo5_reg>;
                cd-gpios = <&gpio TEGRA_GPIO(I, 5) GPIO_ACTIVE_LOW>;
                wp-gpios = <&gpio TEGRA_GPIO(T, 3) GPIO_ACTIVE_HIGH>;
                power-gpios = <&gpio TEGRA_GPIO(D, 7) GPIO_ACTIVE_HIGH>;
index ecf9e0c..e53c6cf 100644
@@ -7,9 +7,15 @@ CFLAGS_pmsu.o                  := -march=armv7-a
 obj-$(CONFIG_MACH_MVEBU_ANY)    += system-controller.o mvebu-soc-id.o
 
 ifeq ($(CONFIG_MACH_MVEBU_V7),y)
-obj-y                           += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o pm.o pm-board.o
+obj-y                           += cpu-reset.o board-v7.o coherency.o coherency_ll.o pmsu.o pmsu_ll.o
+
+obj-$(CONFIG_PM)                += pm.o pm-board.o
 obj-$(CONFIG_SMP)               += platsmp.o headsmp.o platsmp-a9.o headsmp-a9.o
 endif
 
 obj-$(CONFIG_MACH_DOVE)                 += dove.o
-obj-$(CONFIG_MACH_KIRKWOOD)     += kirkwood.o kirkwood-pm.o
+
+ifeq ($(CONFIG_MACH_KIRKWOOD),y)
+obj-y                           += kirkwood.o
+obj-$(CONFIG_PM)                += kirkwood-pm.o
+endif
index 7e989d6..e80f0dd 100644
@@ -162,22 +162,16 @@ exit:
 }
 
 /*
- * This ioremap hook is used on Armada 375/38x to ensure that PCIe
- * memory areas are mapped as MT_UNCACHED instead of MT_DEVICE. This
- * is needed as a workaround for a deadlock issue between the PCIe
- * interface and the cache controller.
+ * This ioremap hook is used on Armada 375/38x to ensure that all MMIO
+ * areas are mapped as MT_UNCACHED instead of MT_DEVICE. This is
+ * needed for the HW I/O coherency mechanism to work properly without
+ * deadlock.
  */
 static void __iomem *
-armada_pcie_wa_ioremap_caller(phys_addr_t phys_addr, size_t size,
-                             unsigned int mtype, void *caller)
+armada_wa_ioremap_caller(phys_addr_t phys_addr, size_t size,
+                        unsigned int mtype, void *caller)
 {
-       struct resource pcie_mem;
-
-       mvebu_mbus_get_pcie_mem_aperture(&pcie_mem);
-
-       if (pcie_mem.start <= phys_addr && (phys_addr + size) <= pcie_mem.end)
-               mtype = MT_UNCACHED;
-
+       mtype = MT_UNCACHED;
        return __arm_ioremap_caller(phys_addr, size, mtype, caller);
 }
 
@@ -186,7 +180,8 @@ static void __init armada_375_380_coherency_init(struct device_node *np)
        struct device_node *cache_dn;
 
        coherency_cpu_base = of_iomap(np, 0);
-       arch_ioremap_caller = armada_pcie_wa_ioremap_caller;
+       arch_ioremap_caller = armada_wa_ioremap_caller;
+       pci_ioremap_set_mem_type(MT_UNCACHED);
 
        /*
         * We should switch the PL310 to I/O coherency mode only if
index 87e1985..9d9fd4b 100644
 #define APM_CPU_PART_POTENZA           0x000
 
 #define CAVIUM_CPU_PART_THUNDERX       0x0A1
+#define CAVIUM_CPU_PART_THUNDERX_81XX  0x0A2
 
 #define BRCM_CPU_PART_VULCAN           0x516
 
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_THUNDERX  MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+#define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 
 #ifndef __ASSEMBLY__
 
index a307eb6..7f94755 100644
@@ -117,6 +117,8 @@ struct pt_regs {
        };
        u64 orig_x0;
        u64 syscallno;
+       u64 orig_addr_limit;
+       u64 unused;     // maintain 16 byte alignment
 };
 
 #define arch_has_single_step() (1)
index f8e5d47..2f4ba77 100644
@@ -60,6 +60,7 @@ int main(void)
   DEFINE(S_PC,                 offsetof(struct pt_regs, pc));
   DEFINE(S_ORIG_X0,            offsetof(struct pt_regs, orig_x0));
   DEFINE(S_SYSCALLNO,          offsetof(struct pt_regs, syscallno));
+  DEFINE(S_ORIG_ADDR_LIMIT,    offsetof(struct pt_regs, orig_addr_limit));
   DEFINE(S_FRAME_SIZE,         sizeof(struct pt_regs));
   BLANK();
   DEFINE(MM_CONTEXT_ID,                offsetof(struct mm_struct, context.id.counter));
index d427894..af716b6 100644
@@ -98,6 +98,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
                MIDR_RANGE(MIDR_THUNDERX, 0x00,
                           (1 << MIDR_VARIANT_SHIFT) | 1),
        },
+       {
+       /* Cavium ThunderX, T81 pass 1.0 */
+               .desc = "Cavium erratum 27456",
+               .capability = ARM64_WORKAROUND_CAVIUM_27456,
+               MIDR_RANGE(MIDR_THUNDERX_81XX, 0x00, 0x00),
+       },
 #endif
        {
        }
index 12e8d2b..6c3b734 100644
@@ -28,6 +28,7 @@
 #include <asm/errno.h>
 #include <asm/esr.h>
 #include <asm/irq.h>
+#include <asm/memory.h>
 #include <asm/thread_info.h>
 #include <asm/unistd.h>
 
        mov     x29, xzr                        // fp pointed to user-space
        .else
        add     x21, sp, #S_FRAME_SIZE
-       .endif
+       get_thread_info tsk
+       /* Save the task's original addr_limit and set USER_DS (TASK_SIZE_64) */
+       ldr     x20, [tsk, #TI_ADDR_LIMIT]
+       str     x20, [sp, #S_ORIG_ADDR_LIMIT]
+       mov     x20, #TASK_SIZE_64
+       str     x20, [tsk, #TI_ADDR_LIMIT]
+       ALTERNATIVE(nop, SET_PSTATE_UAO(0), ARM64_HAS_UAO, CONFIG_ARM64_UAO)
+       .endif /* \el == 0 */
        mrs     x22, elr_el1
        mrs     x23, spsr_el1
        stp     lr, x21, [sp, #S_LR]
        .endm
 
        .macro  kernel_exit, el
+       .if     \el != 0
+       /* Restore the task's original addr_limit. */
+       ldr     x20, [sp, #S_ORIG_ADDR_LIMIT]
+       str     x20, [tsk, #TI_ADDR_LIMIT]
+
+       /* No need to restore UAO, it will be restored from SPSR_EL1 */
+       .endif
+
        ldp     x21, x22, [sp, #S_PC]           // load ELR, SPSR
        .if     \el == 0
        ct_user_enter
@@ -406,7 +422,6 @@ el1_irq:
        bl      trace_hardirqs_off
 #endif
 
-       get_thread_info tsk
        irq_handler
 
 #ifdef CONFIG_PREEMPT
index 013e2cb..b1166d1 100644
@@ -280,7 +280,8 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
        }
 
        if (permission_fault(esr) && (addr < USER_DS)) {
-               if (get_fs() == KERNEL_DS)
+               /* regs->orig_addr_limit may be 0 if we entered from EL0 */
+               if (regs->orig_addr_limit == KERNEL_DS)
                        die("Accessing user space memory with fs=KERNEL_DS", regs, esr);
 
                if (!search_exception_tables(regs->pc))
index 01d877c..cf3023d 100644
@@ -8,12 +8,13 @@
 
 #include <asm/processor.h>
 
-static void putc(char c);
+static void m32r_putc(char c);
 
 static int puts(const char *s)
 {
        char c;
-       while ((c = *s++)) putc(c);
+       while ((c = *s++))
+               m32r_putc(c);
        return 0;
 }
 
@@ -41,7 +42,7 @@ static int puts(const char *s)
 #define BOOT_SIO0TXB   PLD_ESIO0TXB
 #endif
 
-static void putc(char c)
+static void m32r_putc(char c)
 {
        while ((*BOOT_SIO0STS & 0x3) != 0x3)
                cpu_relax();
@@ -61,7 +62,7 @@ static void putc(char c)
 #define SIO0TXB        (volatile unsigned short *)(0x00efd000 + 30)
 #endif
 
-static void putc(char c)
+static void m32r_putc(char c)
 {
        while ((*SIO0STS & 0x1) == 0)
                cpu_relax();
index fa31be2..73d92ea 100644
@@ -288,7 +288,7 @@ _clear_bss:
 #endif
 
        /*
-        *      Assember start up done, start code proper.
+        *      Assembler start up done, start code proper.
         */
        jsr     start_kernel                    /* start Linux kernel */
 
index c525e4c..9abb1a4 100644
@@ -111,7 +111,7 @@ void __init config_BSP(char *commandp, int size)
 /***************************************************************************/
 
 /*
- * Some 5272 based boards have the FEC ethernet diectly connected to
+ * Some 5272 based boards have the FEC ethernet directly connected to
  * an ethernet switch. In this case we need to use the fixed phy type,
  * and we need to declare it early in boot.
  */
index 821de92..6a640be 100644
@@ -42,7 +42,7 @@ static unsigned long iospace;
 
 /*
  * We need to be carefull probing on bus 0 (directly connected to host
- * bridge). We should only acccess the well defined possible devices in
+ * bridge). We should only access the well defined possible devices in
  * use, ignore aliases and the like.
  */
 static unsigned char mcf_host_slot2sid[32] = {
index 3ee6976..8f5b6f7 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -359,6 +360,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -553,7 +555,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index e96787f..31bded9 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -341,6 +342,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -512,7 +514,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 083fe6b..0d7739e 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -350,6 +351,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -533,7 +535,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 475130c..2cbb5c4 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -340,6 +341,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -504,7 +506,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 4339658..96102a4 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -341,6 +342,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -514,7 +516,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 831cc8c..97d88f7 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -357,6 +358,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -536,7 +538,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 6377afe..be25ef2 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -390,6 +391,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -616,7 +618,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 4304b3d..a008344 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -339,6 +340,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -504,7 +506,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 074bda4..6735a25 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -340,6 +341,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -504,7 +506,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 07b9fa8..780c6e9 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -346,6 +347,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -527,7 +529,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 36e6fae..44693cf 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -337,6 +338,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -506,7 +508,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 903acf9..ef0071d 100644
@@ -9,6 +9,7 @@ CONFIG_LOG_BUF_SHIFT=16
 # CONFIG_PID_NS is not set
 # CONFIG_NET_NS is not set
 CONFIG_BLK_DEV_INITRD=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
 CONFIG_USERFAULTFD=y
 CONFIG_SLAB=y
 CONFIG_MODULES=y
@@ -337,6 +338,7 @@ CONFIG_MACVTAP=m
 CONFIG_IPVLAN=m
 CONFIG_VXLAN=m
 CONFIG_GENEVE=m
+CONFIG_GTP=m
 CONFIG_MACSEC=m
 CONFIG_NETCONSOLE=m
 CONFIG_NETCONSOLE_DYNAMIC=y
@@ -506,7 +508,9 @@ CONFIG_TEST_STRING_HELPERS=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_BITMAP=m
+CONFIG_TEST_UUID=m
 CONFIG_TEST_RHASHTABLE=m
+CONFIG_TEST_HASH=m
 CONFIG_TEST_LKM=m
 CONFIG_TEST_USER_COPY=m
 CONFIG_TEST_BPF=m
index 78cb60f..9bbffeb 100644
@@ -10191,7 +10191,7 @@ xdnrm_con:
 xdnrm_sd:
        mov.l           %a1,-(%sp)
        tst.b           LOCAL_EX(%a0)           # is denorm pos or neg?
-       smi.b           %d1                     # set d0 accodingly
+       smi.b           %d1                     # set d0 accordingly
        bsr.l           unf_sub
        mov.l           (%sp)+,%a1
 xdnrm_exit:
@@ -10990,7 +10990,7 @@ src_qnan_m:
 # routines where an instruction is selected by an index into
 # a large jump table corresponding to a given instruction which
 # has been decoded. Flow continues here where we now decode
-# further accoding to the source operand type.
+# further according to the source operand type.
 #
 
        global          fsinh
@@ -23196,14 +23196,14 @@ m_sign:
 #
 #  1. Branch on the sign of the adjusted exponent.
 #  2p.(positive exp)
-#   2. Check M16 and the digits in lwords 2 and 3 in decending order.
+#   2. Check M16 and the digits in lwords 2 and 3 in descending order.
 #   3. Add one for each zero encountered until a non-zero digit.
 #   4. Subtract the count from the exp.
 #   5. Check if the exp has crossed zero in #3 above; make the exp abs
 #         and set SE.
 #      6. Multiply the mantissa by 10**count.
 #  2n.(negative exp)
-#   2. Check the digits in lwords 3 and 2 in decending order.
+#   2. Check the digits in lwords 3 and 2 in descending order.
 #   3. Add one for each zero encountered until a non-zero digit.
 #   4. Add the count to the exp.
 #   5. Check if the exp has crossed zero in #3 above; clear SE.
index 4aedef9..3535e6c 100644
@@ -13156,14 +13156,14 @@ m_sign:
 #
 #  1. Branch on the sign of the adjusted exponent.
 #  2p.(positive exp)
-#   2. Check M16 and the digits in lwords 2 and 3 in decending order.
+#   2. Check M16 and the digits in lwords 2 and 3 in descending order.
 #   3. Add one for each zero encountered until a non-zero digit.
 #   4. Subtract the count from the exp.
 #   5. Check if the exp has crossed zero in #3 above; make the exp abs
 #         and set SE.
 #      6. Multiply the mantissa by 10**count.
 #  2n.(negative exp)
-#   2. Check the digits in lwords 3 and 2 in decending order.
+#   2. Check the digits in lwords 3 and 2 in descending order.
 #   3. Add one for each zero encountered until a non-zero digit.
 #   4. Add the count to the exp.
 #   5. Check if the exp has crossed zero in #3 above; clear SE.
index 429fe26..208b4da 100644
@@ -18,7 +18,7 @@
  * AUG/22/2000 : added support for 32-bit Dual-Address-Mode (K) 2000
  *               Oliver Kamphenkel (O.Kamphenkel@tu-bs.de)
  *
- * AUG/25/2000 : addad support for 8, 16 and 32-bit Single-Address-Mode (K)2000
+ * AUG/25/2000 : added support for 8, 16 and 32-bit Single-Address-Mode (K)2000
  *               Oliver Kamphenkel (O.Kamphenkel@tu-bs.de)
  *
  * APR/18/2002 : added proper support for MCF5272 DMA controller.
index f186459..699f20c 100644
 /*
  *     I2C module.
  */
-#define MCFI2C_BASE0           (MCF_MBAR + 0x280)      /* Base addreess I2C0 */
+#define MCFI2C_BASE0           (MCF_MBAR + 0x280)      /* Base address I2C0 */
 #define MCFI2C_SIZE0           0x20                    /* Register set size */
 
-#define MCFI2C_BASE1           (MCF_MBAR2 + 0x440)     /* Base addreess I2C1 */
+#define MCFI2C_BASE1           (MCF_MBAR2 + 0x440)     /* Base address I2C1 */
 #define MCFI2C_SIZE1           0x20                    /* Register set size */
 
 /*
index 26cc3d5..8824236 100644
@@ -38,7 +38,7 @@
 /*
  *     MMU Operation register.
  */
-#define        MMUOR_UAA       0x00000001              /* Update allocatiom address */
+#define        MMUOR_UAA       0x00000001              /* Update allocation address */
 #define        MMUOR_ACC       0x00000002              /* TLB access */
 #define        MMUOR_RD        0x00000004              /* TLB access read */
 #define        MMUOR_WR        0x00000000              /* TLB access write */
index fc5b362..c48d21b 100644
@@ -1,6 +1,6 @@
 /*
  * Q40 master Chip Control
- * RTC stuff merged for compactnes..
+ * RTC stuff merged for compactness.
 */
 
 #ifndef _Q40_MASTER_H
index 4d2adfb..7990b6f 100644
@@ -60,7 +60,7 @@
  *
  * The host talks to the IOPs using a rather simple message-passing scheme via
  * a shared memory area in the IOP RAM. Each IOP has seven "channels"; each
- * channel is conneced to a specific software driver on the IOP. For example
+ * channel is connected to a specific software driver on the IOP. For example
  * on the SCC IOP there is one channel for each serial port. Each channel has
  * an incoming and and outgoing message queue with a depth of one.
  *
index 759679d..6d1e760 100644
@@ -130,7 +130,7 @@ do_fscc=0
        bfextu  %d2{#13,#3},%d0
 .endm
 
-| decode the 8bit diplacement from the brief extension word
+| decode the 8bit displacement from the brief extension word
 .macro fp_decode_disp8
        move.b  %d2,%d0
        ext.w   %d0
index f538167..7d44e88 100644
@@ -633,7 +633,7 @@ static inline struct page *pmd_page(pmd_t pmd)
 
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
-       pmd_val(pmd) = (pmd_val(pmd) & _PAGE_CHG_MASK) |
+       pmd_val(pmd) = (pmd_val(pmd) & (_PAGE_CHG_MASK | _PAGE_HUGE)) |
                       (pgprot_val(newprot) & ~_PAGE_CHG_MASK);
        return pmd;
 }
index a8e8321..92619cc 100644
@@ -979,12 +979,15 @@ static int perf_push_sample(struct perf_event *event, struct sf_raw_sample *sfr)
        struct pt_regs regs;
        struct perf_sf_sde_regs *sde_regs;
        struct perf_sample_data data;
-       struct perf_raw_record raw;
+       struct perf_raw_record raw = {
+               .frag = {
+                       .size = sfr->size,
+                       .data = sfr,
+               },
+       };
 
        /* Setup perf sample */
        perf_sample_data_init(&data, 0, event->hw.last_period);
-       raw.size = sfr->size;
-       raw.data = sfr;
        data.raw = &raw;
 
        /* Setup pt_regs to look like an CPU-measurement external interrupt
index feb90f6..72dea2f 100644
@@ -655,8 +655,12 @@ fail:
        }
 
        if (event->attr.sample_type & PERF_SAMPLE_RAW) {
-               raw.size = sizeof(u32) + ibs_data.size;
-               raw.data = ibs_data.data;
+               raw = (struct perf_raw_record){
+                       .frag = {
+                               .size = sizeof(u32) + ibs_data.size,
+                               .data = ibs_data.data,
+                       },
+               };
                data.raw = &raw;
        }
 
index 33787ee..91eac39 100644
@@ -263,7 +263,7 @@ static bool check_hw_exists(void)
 
 msr_fail:
        pr_cont("Broken PMU hardware detected, using software events only.\n");
-       pr_info("%sFailed to access perfctr msr (MSR %x is %Lx)\n",
+       printk("%sFailed to access perfctr msr (MSR %x is %Lx)\n",
                boot_cpu_has(X86_FEATURE_HYPERVISOR) ? KERN_INFO : KERN_ERR,
                reg, val_new);
 
@@ -2319,7 +2319,7 @@ void
 perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
        struct stack_frame frame;
-       const void __user *fp;
+       const unsigned long __user *fp;
 
        if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
                /* TODO: We don't support guest os callchain now */
@@ -2332,7 +2332,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
        if (regs->flags & (X86_VM_MASK | PERF_EFLAGS_VM))
                return;
 
-       fp = (void __user *)regs->bp;
+       fp = (unsigned long __user *)regs->bp;
 
        perf_callchain_store(entry, regs->ip);
 
@@ -2345,16 +2345,17 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
        pagefault_disable();
        while (entry->nr < entry->max_stack) {
                unsigned long bytes;
+
                frame.next_frame             = NULL;
                frame.return_address = 0;
 
-               if (!access_ok(VERIFY_READ, fp, 16))
+               if (!access_ok(VERIFY_READ, fp, sizeof(*fp) * 2))
                        break;
 
-               bytes = __copy_from_user_nmi(&frame.next_frame, fp, 8);
+               bytes = __copy_from_user_nmi(&frame.next_frame, fp, sizeof(*fp));
                if (bytes != 0)
                        break;
-               bytes = __copy_from_user_nmi(&frame.return_address, fp+8, 8);
+               bytes = __copy_from_user_nmi(&frame.return_address, fp + 1, sizeof(*fp));
                if (bytes != 0)
                        break;
 
index 3660b2c..06c2baa 100644
@@ -1,8 +1,8 @@
 obj-$(CONFIG_CPU_SUP_INTEL)            += core.o bts.o cqm.o
 obj-$(CONFIG_CPU_SUP_INTEL)            += ds.o knc.o
 obj-$(CONFIG_CPU_SUP_INTEL)            += lbr.o p4.o p6.o pt.o
-obj-$(CONFIG_PERF_EVENTS_INTEL_RAPL)   += intel-rapl.o
-intel-rapl-objs                                := rapl.o
+obj-$(CONFIG_PERF_EVENTS_INTEL_RAPL)   += intel-rapl-perf.o
+intel-rapl-perf-objs                   := rapl.o
 obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE) += intel-uncore.o
 intel-uncore-objs                      := uncore.o uncore_nhmex.o uncore_snb.o uncore_snbep.o
 obj-$(CONFIG_PERF_EVENTS_INTEL_CSTATE) += intel-cstate.o
index 7c66695..9b4f9d3 100644
@@ -115,6 +115,10 @@ static struct event_constraint intel_snb_event_constraints[] __read_mostly =
        INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), /* CYCLE_ACTIVITY.CYCLES_NO_DISPATCH */
        INTEL_UEVENT_CONSTRAINT(0x02a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
 
+       /*
+        * When HT is off these events can only run on the bottom 4 counters
+        * When HT is on, they are impacted by the HT bug and require EXCL access
+        */
        INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
@@ -139,6 +143,10 @@ static struct event_constraint intel_ivb_event_constraints[] __read_mostly =
        INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4), /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
        INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
 
+       /*
+        * When HT is off these events can only run on the bottom 4 counters
+        * When HT is on, they are impacted by the HT bug and require EXCL access
+        */
        INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
@@ -182,6 +190,16 @@ struct event_constraint intel_skl_event_constraints[] = {
        FIXED_EVENT_CONSTRAINT(0x003c, 1),      /* CPU_CLK_UNHALTED.CORE */
        FIXED_EVENT_CONSTRAINT(0x0300, 2),      /* CPU_CLK_UNHALTED.REF */
        INTEL_UEVENT_CONSTRAINT(0x1c0, 0x2),    /* INST_RETIRED.PREC_DIST */
+
+       /*
+        * when HT is off, these can only run on the bottom 4 counters
+        */
+       INTEL_EVENT_CONSTRAINT(0xd0, 0xf),      /* MEM_INST_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xd1, 0xf),      /* MEM_LOAD_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xd2, 0xf),      /* MEM_LOAD_L3_HIT_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xcd, 0xf),      /* MEM_TRANS_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xc6, 0xf),      /* FRONTEND_RETIRED.* */
+
        EVENT_CONSTRAINT_END
 };
 
@@ -250,6 +268,10 @@ static struct event_constraint intel_hsw_event_constraints[] = {
        /* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */
        INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf),
 
+       /*
+        * When HT is off these events can only run on the bottom 4 counters
+        * When HT is on, they are impacted by the HT bug and require EXCL access
+        */
        INTEL_EXCLEVT_CONSTRAINT(0xd0, 0xf), /* MEM_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
        INTEL_EXCLEVT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
@@ -264,6 +286,13 @@ struct event_constraint intel_bdw_event_constraints[] = {
        FIXED_EVENT_CONSTRAINT(0x0300, 2),      /* CPU_CLK_UNHALTED.REF */
        INTEL_UEVENT_CONSTRAINT(0x148, 0x4),    /* L1D_PEND_MISS.PENDING */
        INTEL_UBIT_EVENT_CONSTRAINT(0x8a3, 0x4),        /* CYCLE_ACTIVITY.CYCLES_L1D_MISS */
+       /*
+        * when HT is off, these can only run on the bottom 4 counters
+        */
+       INTEL_EVENT_CONSTRAINT(0xd0, 0xf),      /* MEM_INST_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xd1, 0xf),      /* MEM_LOAD_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xd2, 0xf),      /* MEM_LOAD_L3_HIT_RETIRED.* */
+       INTEL_EVENT_CONSTRAINT(0xcd, 0xf),      /* MEM_TRANS_RETIRED.* */
        EVENT_CONSTRAINT_END
 };
 
index 4a41348..c64b1e9 100644 (file)
 #define X86_BUG_FXSAVE_LEAK    X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */
 #define X86_BUG_CLFLUSH_MONITOR        X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */
 #define X86_BUG_SYSRET_SS_ATTRS        X86_BUG(8) /* SYSRET doesn't fix up SS attrs */
-#define X86_BUG_NULL_SEG       X86_BUG(9) /* Nulling a selector preserves the base */
-#define X86_BUG_SWAPGS_FENCE   X86_BUG(10) /* SWAPGS without input dep on GS */
-
-
 #ifdef CONFIG_X86_32
 /*
  * 64-bit kernels don't use X86_BUG_ESPFIX.  Make the define conditional
  */
 #define X86_BUG_ESPFIX         X86_BUG(9) /* "" IRET to 16-bit SS corrupts ESP/RSP high bits */
 #endif
+#define X86_BUG_NULL_SEG       X86_BUG(10) /* Nulling a selector preserves the base */
+#define X86_BUG_SWAPGS_FENCE   X86_BUG(11) /* SWAPGS without input dep on GS */
 
 #endif /* _ASM_X86_CPUFEATURES_H */
index a147e67..e991d5c 100644 (file)
@@ -71,8 +71,8 @@ int amd_cache_northbridges(void)
        while ((misc = next_northbridge(misc, amd_nb_misc_ids)) != NULL)
                i++;
 
-       if (i == 0)
-               return 0;
+       if (!i)
+               return -ENODEV;
 
        nb = kzalloc(i * sizeof(struct amd_northbridge), GFP_KERNEL);
        if (!nb)
index bca14c8..57b7137 100644 (file)
 
 #include <linux/pci.h>
 #include <linux/acpi.h>
+#include <linux/delay.h>
+#include <linux/dmi.h>
 #include <linux/pci_ids.h>
+#include <linux/bcma/bcma.h>
+#include <linux/bcma/bcma_regs.h>
 #include <drm/i915_drm.h>
 #include <asm/pci-direct.h>
 #include <asm/dma.h>
@@ -21,6 +25,9 @@
 #include <asm/iommu.h>
 #include <asm/gart.h>
 #include <asm/irq_remapping.h>
+#include <asm/early_ioremap.h>
+
+#define dev_err(msg)  pr_err("pci 0000:%02x:%02x.%d: %s", bus, slot, func, msg)
 
 static void __init fix_hypertransport_config(int num, int slot, int func)
 {
@@ -75,6 +82,13 @@ static void __init nvidia_bugs(int num, int slot, int func)
 {
 #ifdef CONFIG_ACPI
 #ifdef CONFIG_X86_IO_APIC
+       /*
+        * Only applies to Nvidia root ports (bus 0) and not to
+        * Nvidia graphics cards with PCI ports on secondary buses.
+        */
+       if (num)
+               return;
+
        /*
         * All timer overrides on Nvidia are
         * wrong unless HPET is enabled.
@@ -590,6 +604,61 @@ static void __init force_disable_hpet(int num, int slot, int func)
 #endif
 }
 
+#define BCM4331_MMIO_SIZE      16384
+#define BCM4331_PM_CAP         0x40
+#define bcma_aread32(reg)      ioread32(mmio + 1 * BCMA_CORE_SIZE + reg)
+#define bcma_awrite32(reg, val)        iowrite32(val, mmio + 1 * BCMA_CORE_SIZE + reg)
+
+static void __init apple_airport_reset(int bus, int slot, int func)
+{
+       void __iomem *mmio;
+       u16 pmcsr;
+       u64 addr;
+       int i;
+
+       if (!dmi_match(DMI_SYS_VENDOR, "Apple Inc."))
+               return;
+
+       /* Card may have been put into PCI_D3hot by grub quirk */
+       pmcsr = read_pci_config_16(bus, slot, func, BCM4331_PM_CAP + PCI_PM_CTRL);
+
+       if ((pmcsr & PCI_PM_CTRL_STATE_MASK) != PCI_D0) {
+               pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
+               write_pci_config_16(bus, slot, func, BCM4331_PM_CAP + PCI_PM_CTRL, pmcsr);
+               mdelay(10);
+
+               pmcsr = read_pci_config_16(bus, slot, func, BCM4331_PM_CAP + PCI_PM_CTRL);
+               if ((pmcsr & PCI_PM_CTRL_STATE_MASK) != PCI_D0) {
+                       dev_err("Cannot power up Apple AirPort card\n");
+                       return;
+               }
+       }
+
+       addr  =      read_pci_config(bus, slot, func, PCI_BASE_ADDRESS_0);
+       addr |= (u64)read_pci_config(bus, slot, func, PCI_BASE_ADDRESS_1) << 32;
+       addr &= PCI_BASE_ADDRESS_MEM_MASK;
+
+       mmio = early_ioremap(addr, BCM4331_MMIO_SIZE);
+       if (!mmio) {
+               dev_err("Cannot iomap Apple AirPort card\n");
+               return;
+       }
+
+       pr_info("Resetting Apple AirPort card (left enabled by EFI)\n");
+
+       for (i = 0; bcma_aread32(BCMA_RESET_ST) && i < 30; i++)
+               udelay(10);
+
+       bcma_awrite32(BCMA_RESET_CTL, BCMA_RESET_CTL_RESET);
+       bcma_aread32(BCMA_RESET_CTL);
+       udelay(1);
+
+       bcma_awrite32(BCMA_RESET_CTL, 0);
+       bcma_aread32(BCMA_RESET_CTL);
+       udelay(10);
+
+       early_iounmap(mmio, BCM4331_MMIO_SIZE);
+}
 
 #define QFLAG_APPLY_ONCE       0x1
 #define QFLAG_APPLIED          0x2
@@ -603,12 +672,6 @@ struct chipset {
        void (*f)(int num, int slot, int func);
 };
 
-/*
- * Only works for devices on the root bus. If you add any devices
- * not on bus 0 readd another loop level in early_quirks(). But
- * be careful because at least the Nvidia quirk here relies on
- * only matching on bus 0.
- */
 static struct chipset early_qrk[] __initdata = {
        { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
          PCI_CLASS_BRIDGE_PCI, PCI_ANY_ID, QFLAG_APPLY_ONCE, nvidia_bugs },
@@ -638,9 +701,13 @@ static struct chipset early_qrk[] __initdata = {
         */
        { PCI_VENDOR_ID_INTEL, 0x0f00,
                PCI_CLASS_BRIDGE_HOST, PCI_ANY_ID, 0, force_disable_hpet},
+       { PCI_VENDOR_ID_BROADCOM, 0x4331,
+         PCI_CLASS_NETWORK_OTHER, PCI_ANY_ID, 0, apple_airport_reset},
        {}
 };
 
+static void __init early_pci_scan_bus(int bus);
+
 /**
  * check_dev_quirk - apply early quirks to a given PCI device
  * @num: bus number
@@ -649,7 +716,7 @@ static struct chipset early_qrk[] __initdata = {
  *
  * Check the vendor & device ID against the early quirks table.
  *
- * If the device is single function, let early_quirks() know so we don't
+ * If the device is single function, let early_pci_scan_bus() know so we don't
  * poke at this device again.
  */
 static int __init check_dev_quirk(int num, int slot, int func)
@@ -658,6 +725,7 @@ static int __init check_dev_quirk(int num, int slot, int func)
        u16 vendor;
        u16 device;
        u8 type;
+       u8 sec;
        int i;
 
        class = read_pci_config_16(num, slot, func, PCI_CLASS_DEVICE);
@@ -685,25 +753,36 @@ static int __init check_dev_quirk(int num, int slot, int func)
 
        type = read_pci_config_byte(num, slot, func,
                                    PCI_HEADER_TYPE);
+
+       if ((type & 0x7f) == PCI_HEADER_TYPE_BRIDGE) {
+               sec = read_pci_config_byte(num, slot, func, PCI_SECONDARY_BUS);
+               if (sec > num)
+                       early_pci_scan_bus(sec);
+       }
+
        if (!(type & 0x80))
                return -1;
 
        return 0;
 }
 
-void __init early_quirks(void)
+static void __init early_pci_scan_bus(int bus)
 {
        int slot, func;
 
-       if (!early_pci_allowed())
-               return;
-
        /* Poor man's PCI discovery */
-       /* Only scan the root bus */
        for (slot = 0; slot < 32; slot++)
                for (func = 0; func < 8; func++) {
                        /* Only probe function 0 on single fn devices */
-                       if (check_dev_quirk(0, slot, func))
+                       if (check_dev_quirk(bus, slot, func))
                                break;
                }
 }
+
+void __init early_quirks(void)
+{
+       if (!early_pci_allowed())
+               return;
+
+       early_pci_scan_bus(0);
+}
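The early-quirks hunk above restructures the flat root-bus scan into `early_pci_scan_bus()`, which recurses into a bridge's secondary bus when that bus number is greater than the current one. A minimal, self-contained model of that recursion (toy config space, hypothetical data structures — not the kernel's `read_pci_config_byte()` path) can be sketched as:

```c
#include <stdint.h>

/* Toy config space: secondary[bus][slot] holds the secondary-bus number
 * a bridge at that slot reports; 0 means the slot is an endpoint or empty.
 * This models only what the recursion needs, not real PCI config access. */
#define MAX_BUS 4
static int secondary[MAX_BUS][32];
static int visited[MAX_BUS];

static void early_pci_scan_bus(int bus)
{
	int slot;

	visited[bus] = 1;
	for (slot = 0; slot < 32; slot++) {
		int sec = secondary[bus][slot];

		/* Recurse only downstream (sec > bus), mirroring the patch's
		 * "if (sec > num)" check, which also avoids loops on broken
		 * topologies where a bridge points back at an earlier bus. */
		if (sec > bus && sec < MAX_BUS)
			early_pci_scan_bus(sec);
	}
}
```

With a bridge on bus 0 leading to bus 1, and one on bus 1 leading to bus 2, a single `early_pci_scan_bus(0)` call visits buses 0, 1, and 2 — which is how the Broadcom 4331 quirk can now fire for an AirPort card behind a bridge.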
index 1b1110f..0493c17 100644 (file)
@@ -54,8 +54,8 @@ static int kasan_die_handler(struct notifier_block *self,
                             void *data)
 {
        if (val == DIE_GPF) {
-               pr_emerg("CONFIG_KASAN_INLINE enabled");
-               pr_emerg("GPF could be caused by NULL-ptr deref or user memory access");
+               pr_emerg("CONFIG_KASAN_INLINE enabled\n");
+               pr_emerg("GPF could be caused by NULL-ptr deref or user memory access\n");
        }
        return NOTIFY_OK;
 }
index b2a4e2a..3cd6983 100644 (file)
@@ -396,6 +396,7 @@ int __init pci_acpi_init(void)
                return -ENODEV;
 
        printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
+       acpi_irq_penalty_init();
        pcibios_enable_irq = acpi_pci_irq_enable;
        pcibios_disable_irq = acpi_pci_irq_disable;
        x86_init.pci.init_irq = x86_init_noop;
index 009947d..f2b5e6a 100644 (file)
@@ -19,6 +19,7 @@
 #include <asm/mtrr.h>
 #include <asm/sections.h>
 #include <asm/suspend.h>
+#include <asm/tlbflush.h>
 
 /* Defined in hibernate_asm_64.S */
 extern asmlinkage __visible int restore_image(void);
@@ -28,6 +29,7 @@ extern asmlinkage __visible int restore_image(void);
  * kernel's text (this value is passed in the image header).
  */
 unsigned long restore_jump_address __visible;
+unsigned long jump_address_phys;
 
 /*
  * Value of the cr3 register from before the hibernation (this value is passed
@@ -37,7 +39,43 @@ unsigned long restore_cr3 __visible;
 
 pgd_t *temp_level4_pgt __visible;
 
-void *relocated_restore_code __visible;
+unsigned long relocated_restore_code __visible;
+
+static int set_up_temporary_text_mapping(void)
+{
+       pmd_t *pmd;
+       pud_t *pud;
+
+       /*
+        * The new mapping only has to cover the page containing the image
+        * kernel's entry point (jump_address_phys), because the switch over to
+        * it is carried out by relocated code running from a page allocated
+        * specifically for this purpose and covered by the identity mapping, so
+        * the temporary kernel text mapping is only needed for the final jump.
+        * Moreover, in that mapping the virtual address of the image kernel's
+        * entry point must be the same as its virtual address in the image
+        * kernel (restore_jump_address), so the image kernel's
+        * restore_registers() code doesn't find itself in a different area of
+        * the virtual address space after switching over to the original page
+        * tables used by the image kernel.
+        */
+       pud = (pud_t *)get_safe_page(GFP_ATOMIC);
+       if (!pud)
+               return -ENOMEM;
+
+       pmd = (pmd_t *)get_safe_page(GFP_ATOMIC);
+       if (!pmd)
+               return -ENOMEM;
+
+       set_pmd(pmd + pmd_index(restore_jump_address),
+               __pmd((jump_address_phys & PMD_MASK) | __PAGE_KERNEL_LARGE_EXEC));
+       set_pud(pud + pud_index(restore_jump_address),
+               __pud(__pa(pmd) | _KERNPG_TABLE));
+       set_pgd(temp_level4_pgt + pgd_index(restore_jump_address),
+               __pgd(__pa(pud) | _KERNPG_TABLE));
+
+       return 0;
+}
 
 static void *alloc_pgt_page(void *context)
 {
@@ -59,9 +97,10 @@ static int set_up_temporary_mappings(void)
        if (!temp_level4_pgt)
                return -ENOMEM;
 
-       /* It is safe to reuse the original kernel mapping */
-       set_pgd(temp_level4_pgt + pgd_index(__START_KERNEL_map),
-               init_level4_pgt[pgd_index(__START_KERNEL_map)]);
+       /* Prepare a temporary mapping for the kernel text */
+       result = set_up_temporary_text_mapping();
+       if (result)
+               return result;
 
        /* Set up the direct mapping from scratch */
        for (i = 0; i < nr_pfn_mapped; i++) {
@@ -78,19 +117,50 @@ static int set_up_temporary_mappings(void)
        return 0;
 }
 
+static int relocate_restore_code(void)
+{
+       pgd_t *pgd;
+       pud_t *pud;
+
+       relocated_restore_code = get_safe_page(GFP_ATOMIC);
+       if (!relocated_restore_code)
+               return -ENOMEM;
+
+       memcpy((void *)relocated_restore_code, &core_restore_code, PAGE_SIZE);
+
+       /* Make the page containing the relocated code executable */
+       pgd = (pgd_t *)__va(read_cr3()) + pgd_index(relocated_restore_code);
+       pud = pud_offset(pgd, relocated_restore_code);
+       if (pud_large(*pud)) {
+               set_pud(pud, __pud(pud_val(*pud) & ~_PAGE_NX));
+       } else {
+               pmd_t *pmd = pmd_offset(pud, relocated_restore_code);
+
+               if (pmd_large(*pmd)) {
+                       set_pmd(pmd, __pmd(pmd_val(*pmd) & ~_PAGE_NX));
+               } else {
+                       pte_t *pte = pte_offset_kernel(pmd, relocated_restore_code);
+
+                       set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_NX));
+               }
+       }
+       __flush_tlb_all();
+
+       return 0;
+}
+
 int swsusp_arch_resume(void)
 {
        int error;
 
        /* We have got enough memory and from now on we cannot recover */
-       if ((error = set_up_temporary_mappings()))
+       error = set_up_temporary_mappings();
+       if (error)
                return error;
 
-       relocated_restore_code = (void *)get_safe_page(GFP_ATOMIC);
-       if (!relocated_restore_code)
-               return -ENOMEM;
-       memcpy(relocated_restore_code, &core_restore_code,
-              &restore_registers - &core_restore_code);
+       error = relocate_restore_code();
+       if (error)
+               return error;
 
        restore_image();
        return 0;
@@ -109,11 +179,12 @@ int pfn_is_nosave(unsigned long pfn)
 
 struct restore_data_record {
        unsigned long jump_address;
+       unsigned long jump_address_phys;
        unsigned long cr3;
        unsigned long magic;
 };
 
-#define RESTORE_MAGIC  0x0123456789ABCDEFUL
+#define RESTORE_MAGIC  0x123456789ABCDEF0UL
 
 /**
  *     arch_hibernation_header_save - populate the architecture specific part
@@ -126,7 +197,8 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 
        if (max_size < sizeof(struct restore_data_record))
                return -EOVERFLOW;
-       rdr->jump_address = restore_jump_address;
+       rdr->jump_address = (unsigned long)&restore_registers;
+       rdr->jump_address_phys = __pa_symbol(&restore_registers);
        rdr->cr3 = restore_cr3;
        rdr->magic = RESTORE_MAGIC;
        return 0;
@@ -142,6 +214,7 @@ int arch_hibernation_header_restore(void *addr)
        struct restore_data_record *rdr = addr;
 
        restore_jump_address = rdr->jump_address;
+       jump_address_phys = rdr->jump_address_phys;
        restore_cr3 = rdr->cr3;
        return (rdr->magic == RESTORE_MAGIC) ? 0 : -EINVAL;
 }
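The hibernation hunks above grow `struct restore_data_record` by a `jump_address_phys` field and bump `RESTORE_MAGIC` accordingly, so a resume kernel built with the old layout rejects an image written with the new one. A minimal sketch of that save/restore handshake (field names mirror the patch; the error value and copy are simplified, this is not the in-kernel code):

```c
#include <string.h>

/* Layout change => new magic; a mismatch means the writer and the resume
 * kernel disagree on this structure and the image must be refused. */
struct restore_data_record {
	unsigned long jump_address;
	unsigned long jump_address_phys;
	unsigned long cr3;
	unsigned long magic;
};

#define RESTORE_MAGIC	0x123456789ABCDEF0UL

/* Simplified arch_hibernation_header_restore(): copy the fields out and
 * validate the magic (the kernel returns -EINVAL; we use -1 here). */
static int header_restore(const void *addr, struct restore_data_record *out)
{
	const struct restore_data_record *rdr = addr;

	memcpy(out, rdr, sizeof(*out));
	return (rdr->magic == RESTORE_MAGIC) ? 0 : -1;
}
```

This is why the magic constant changed in the patch rather than staying at `0x0123456789ABCDEF`: keeping the old value would let an old-format image pass validation while the reader misinterprets the new field.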
index 4400a43..3177c2b 100644 (file)
@@ -44,9 +44,6 @@ ENTRY(swsusp_arch_suspend)
        pushfq
        popq    pt_regs_flags(%rax)
 
-       /* save the address of restore_registers */
-       movq    $restore_registers, %rax
-       movq    %rax, restore_jump_address(%rip)
        /* save cr3 */
        movq    %cr3, %rax
        movq    %rax, restore_cr3(%rip)
@@ -57,31 +54,34 @@ ENTRY(swsusp_arch_suspend)
 ENDPROC(swsusp_arch_suspend)
 
 ENTRY(restore_image)
-       /* switch to temporary page tables */
-       movq    $__PAGE_OFFSET, %rdx
-       movq    temp_level4_pgt(%rip), %rax
-       subq    %rdx, %rax
-       movq    %rax, %cr3
-       /* Flush TLB */
-       movq    mmu_cr4_features(%rip), %rax
-       movq    %rax, %rdx
-       andq    $~(X86_CR4_PGE), %rdx
-       movq    %rdx, %cr4;  # turn off PGE
-       movq    %cr3, %rcx;  # flush TLB
-       movq    %rcx, %cr3;
-       movq    %rax, %cr4;  # turn PGE back on
-
        /* prepare to jump to the image kernel */
-       movq    restore_jump_address(%rip), %rax
-       movq    restore_cr3(%rip), %rbx
+       movq    restore_jump_address(%rip), %r8
+       movq    restore_cr3(%rip), %r9
+
+       /* prepare to switch to temporary page tables */
+       movq    temp_level4_pgt(%rip), %rax
+       movq    mmu_cr4_features(%rip), %rbx
 
        /* prepare to copy image data to their original locations */
        movq    restore_pblist(%rip), %rdx
+
+       /* jump to relocated restore code */
        movq    relocated_restore_code(%rip), %rcx
        jmpq    *%rcx
 
        /* code below has been relocated to a safe page */
 ENTRY(core_restore_code)
+       /* switch to temporary page tables */
+       movq    $__PAGE_OFFSET, %rcx
+       subq    %rcx, %rax
+       movq    %rax, %cr3
+       /* flush TLB */
+       movq    %rbx, %rcx
+       andq    $~(X86_CR4_PGE), %rcx
+       movq    %rcx, %cr4;  # turn off PGE
+       movq    %cr3, %rcx;  # flush TLB
+       movq    %rcx, %cr3;
+       movq    %rbx, %cr4;  # turn PGE back on
 .Lloop:
        testq   %rdx, %rdx
        jz      .Ldone
@@ -96,24 +96,17 @@ ENTRY(core_restore_code)
        /* progress to the next pbe */
        movq    pbe_next(%rdx), %rdx
        jmp     .Lloop
+
 .Ldone:
        /* jump to the restore_registers address from the image header */
-       jmpq    *%rax
-       /*
-        * NOTE: This assumes that the boot kernel's text mapping covers the
-        * image kernel's page containing restore_registers and the address of
-        * this page is the same as in the image kernel's text mapping (it
-        * should always be true, because the text mapping is linear, starting
-        * from 0, and is supposed to cover the entire kernel text for every
-        * kernel).
-        *
-        * code below belongs to the image kernel
-        */
+       jmpq    *%r8
 
+        /* code below belongs to the image kernel */
+       .align PAGE_SIZE
 ENTRY(restore_registers)
        FRAME_BEGIN
        /* go back to the original page tables */
-       movq    %rbx, %cr3
+       movq    %r9, %cr3
 
        /* Flush TLB, including "global" things (vmalloc) */
        movq    mmu_cr4_features(%rip), %rax
index cc7800e..01b8116 100644 (file)
@@ -150,8 +150,10 @@ static int get_task_ioprio(struct task_struct *p)
        if (ret)
                goto out;
        ret = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, IOPRIO_NORM);
+       task_lock(p);
        if (p->io_context)
                ret = p->io_context->ioprio;
+       task_unlock(p);
 out:
        return ret;
 }
index 6a76d5c..9492e1c 100644 (file)
@@ -124,5 +124,10 @@ int mscode_note_digest(void *context, size_t hdrlen,
        struct pefile_context *ctx = context;
 
        ctx->digest = kmemdup(value, vlen, GFP_KERNEL);
-       return ctx->digest ? 0 : -ENOMEM;
+       if (!ctx->digest)
+               return -ENOMEM;
+
+       ctx->digest_len = vlen;
+
+       return 0;
 }
index 44b746e..2ffd697 100644 (file)
@@ -227,7 +227,7 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
                                if (asymmetric_key_id_same(p->id, auth))
                                        goto found_issuer_check_skid;
                        }
-               } else {
+               } else if (sig->auth_ids[1]) {
                        auth = sig->auth_ids[1];
                        pr_debug("- want %*phN\n", auth->len, auth->data);
                        for (p = pkcs7->certs; p; p = p->next) {
index ac4bddf..19d1afb 100644 (file)
@@ -87,7 +87,7 @@ int restrict_link_by_signature(struct key *trust_keyring,
 
        sig = payload->data[asym_auth];
        if (!sig->auth_ids[0] && !sig->auth_ids[1])
-               return 0;
+               return -ENOKEY;
 
        if (ca_keyid && !asymmetric_key_id_partial(sig->auth_ids[1], ca_keyid))
                return -EPERM;
index ead8dc0..8ba4266 100644 (file)
@@ -102,10 +102,10 @@ struct pkcs1pad_inst_ctx {
 };
 
 struct pkcs1pad_request {
-       struct akcipher_request child_req;
-
        struct scatterlist in_sg[3], out_sg[2];
        uint8_t *in_buf, *out_buf;
+
+       struct akcipher_request child_req;
 };
 
 static int pkcs1pad_set_pub_key(struct crypto_akcipher *tfm, const void *key,
index 1f41284..dee8692 100644 (file)
@@ -602,7 +602,7 @@ static int acpi_aml_read_user(char __user *buf, int len)
        crc->tail = (crc->tail + n) & (ACPI_AML_BUF_SIZE - 1);
        ret = n;
 out:
-       acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, !ret);
+       acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, ret >= 0);
        return ret;
 }
 
@@ -672,7 +672,7 @@ static int acpi_aml_write_user(const char __user *buf, int len)
        crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
        ret = n;
 out:
-       acpi_aml_unlock_fifo(ACPI_AML_IN_USER, !ret);
+       acpi_aml_unlock_fifo(ACPI_AML_IN_USER, ret >= 0);
        return n;
 }
 
index 21932d6..a1d177d 100644 (file)
@@ -108,9 +108,7 @@ acpi_ex_add_table(u32 table_index,
 
        /* Add the table to the namespace */
 
-       acpi_ex_exit_interpreter();
        status = acpi_ns_load_table(table_index, parent_node);
-       acpi_ex_enter_interpreter();
        if (ACPI_FAILURE(status)) {
                acpi_ut_remove_reference(obj_desc);
                *ddb_handle = NULL;
index 1783cd7..f631a47 100644 (file)
@@ -47,7 +47,6 @@
 #include "acparser.h"
 #include "acdispat.h"
 #include "actables.h"
-#include "acinterp.h"
 
 #define _COMPONENT          ACPI_NAMESPACE
 ACPI_MODULE_NAME("nsparse")
@@ -171,8 +170,6 @@ acpi_ns_parse_table(u32 table_index, struct acpi_namespace_node *start_node)
 
        ACPI_FUNCTION_TRACE(ns_parse_table);
 
-       acpi_ex_enter_interpreter();
-
        /*
         * AML Parse, pass 1
         *
@@ -188,7 +185,7 @@ acpi_ns_parse_table(u32 table_index, struct acpi_namespace_node *start_node)
        status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS1,
                                            table_index, start_node);
        if (ACPI_FAILURE(status)) {
-               goto error_exit;
+               return_ACPI_STATUS(status);
        }
 
        /*
@@ -204,10 +201,8 @@ acpi_ns_parse_table(u32 table_index, struct acpi_namespace_node *start_node)
        status = acpi_ns_one_complete_parse(ACPI_IMODE_LOAD_PASS2,
                                            table_index, start_node);
        if (ACPI_FAILURE(status)) {
-               goto error_exit;
+               return_ACPI_STATUS(status);
        }
 
-error_exit:
-       acpi_ex_exit_interpreter();
        return_ACPI_STATUS(status);
 }
index 73c76d6..290d6f5 100644 (file)
@@ -1331,8 +1331,6 @@ static int ec_install_handlers(struct acpi_ec *ec)
 
 static void ec_remove_handlers(struct acpi_ec *ec)
 {
-       acpi_ec_stop(ec, false);
-
        if (test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
                if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
                                        ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
@@ -1340,6 +1338,19 @@ static void ec_remove_handlers(struct acpi_ec *ec)
                clear_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags);
        }
 
+       /*
+        * Stops handling the EC transactions after removing the operation
+        * region handler. This is required because _REG(DISCONNECT)
+        * invoked during the removal can result in new EC transactions.
+        *
+        * Flushes the EC requests and thus disables the GPE before
+        * removing the GPE handler. This is required by the current ACPICA
+        * GPE core. ACPICA GPE core will automatically disable a GPE when
+        * it is indicated but there is no way to handle it. So the drivers
+        * must disable the GPEs prior to removing the GPE handlers.
+        */
+       acpi_ec_stop(ec, false);
+
        if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags)) {
                if (ACPI_FAILURE(acpi_remove_gpe_handler(NULL, ec->gpe,
                                        &acpi_ec_gpe_handler)))
index ac6ddcc..1f0e060 100644 (file)
@@ -1131,11 +1131,11 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
 
        /*
         * Until standardization materializes we need to consider up to 3
-        * different command sets.  Note, that checking for zero functions
-        * tells us if any commands might be reachable through this uuid.
+        * different command sets.  Note, that checking for function0 (bit0)
+        * tells us if any commands are reachable through this uuid.
         */
        for (i = NVDIMM_FAMILY_INTEL; i <= NVDIMM_FAMILY_HPE2; i++)
-               if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 0))
+               if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
                        break;
 
        /* limit the supported commands to those that are publicly documented */
@@ -1151,9 +1151,10 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
                if (disable_vendor_specific)
                        dsm_mask &= ~(1 << 8);
        } else {
-               dev_err(dev, "unknown dimm command family\n");
+               dev_dbg(dev, "unknown dimm command family\n");
                nfit_mem->family = -1;
-               return force_enable_dimms ? 0 : -ENODEV;
+               /* DSMs are optional, continue loading the driver... */
+               return 0;
        }
 
        uuid = to_nfit_uuid(nfit_mem->family);
index 4ed4061..c983bf7 100644 (file)
@@ -470,6 +470,7 @@ static int acpi_irq_pci_sharing_penalty(int irq)
 {
        struct acpi_pci_link *link;
        int penalty = 0;
+       int i;
 
        list_for_each_entry(link, &acpi_link_list, list) {
                /*
@@ -478,18 +479,14 @@ static int acpi_irq_pci_sharing_penalty(int irq)
                 */
                if (link->irq.active && link->irq.active == irq)
                        penalty += PIRQ_PENALTY_PCI_USING;
-               else {
-                       int i;
-
-                       /*
-                        * If a link is inactive, penalize the IRQs it
-                        * might use, but not as severely.
-                        */
-                       for (i = 0; i < link->irq.possible_count; i++)
-                               if (link->irq.possible[i] == irq)
-                                       penalty += PIRQ_PENALTY_PCI_POSSIBLE /
-                                               link->irq.possible_count;
-               }
+
+               /*
+                * penalize the IRQs PCI might use, but not as severely.
+                */
+               for (i = 0; i < link->irq.possible_count; i++)
+                       if (link->irq.possible[i] == irq)
+                               penalty += PIRQ_PENALTY_PCI_POSSIBLE /
+                                       link->irq.possible_count;
        }
 
        return penalty;
@@ -499,9 +496,6 @@ static int acpi_irq_get_penalty(int irq)
 {
        int penalty = 0;
 
-       if (irq < ACPI_MAX_ISA_IRQS)
-               penalty += acpi_isa_irq_penalty[irq];
-
        /*
        * Penalize IRQ used by ACPI SCI. If ACPI SCI pin attributes conflict
        * with PCI IRQ attributes, mark ACPI SCI as ISA_ALWAYS so it won't be
@@ -516,10 +510,49 @@ static int acpi_irq_get_penalty(int irq)
                        penalty += PIRQ_PENALTY_PCI_USING;
        }
 
+       if (irq < ACPI_MAX_ISA_IRQS)
+               return penalty + acpi_isa_irq_penalty[irq];
+
        penalty += acpi_irq_pci_sharing_penalty(irq);
        return penalty;
 }
 
+int __init acpi_irq_penalty_init(void)
+{
+       struct acpi_pci_link *link;
+       int i;
+
+       /*
+        * Update penalties to facilitate IRQ balancing.
+        */
+       list_for_each_entry(link, &acpi_link_list, list) {
+
+               /*
+                * reflect the possible and active irqs in the penalty table --
+                * useful for breaking ties.
+                */
+               if (link->irq.possible_count) {
+                       int penalty =
+                           PIRQ_PENALTY_PCI_POSSIBLE /
+                           link->irq.possible_count;
+
+                       for (i = 0; i < link->irq.possible_count; i++) {
+                               if (link->irq.possible[i] < ACPI_MAX_ISA_IRQS)
+                                       acpi_isa_irq_penalty[link->irq.
+                                                        possible[i]] +=
+                                           penalty;
+                       }
+
+               } else if (link->irq.active &&
+                               (link->irq.active < ACPI_MAX_ISA_IRQS)) {
+                       acpi_isa_irq_penalty[link->irq.active] +=
+                           PIRQ_PENALTY_PCI_POSSIBLE;
+               }
+       }
+
+       return 0;
+}
+
 static int acpi_irq_balance = -1;      /* 0: static, 1: balance */
 
 static int acpi_pci_link_allocate(struct acpi_pci_link *link)
index b4de130..22c0995 100644 (file)
@@ -680,6 +680,9 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, u64 rev, u64 funcs)
        u64 mask = 0;
        union acpi_object *obj;
 
+       if (funcs == 0)
+               return false;
+
        obj = acpi_evaluate_dsm(handle, uuid, rev, 0, NULL);
        if (!obj)
                return false;
@@ -692,9 +695,6 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, u64 rev, u64 funcs)
                        mask |= (((u64)obj->buffer.pointer[i]) << (i * 8));
        ACPI_FREE(obj);
 
-       if (funcs == 0)
-               return true;
-
        /*
         * Bit 0 indicates whether there's support for any functions other than
         * function 0 for the specified UUID and revision.
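The `acpi_check_dsm()` change above moves the `funcs == 0` test before the `_DSM` evaluation and makes it fail instead of succeed, since a caller asking for no functions learns nothing. A small, self-contained model of the bitmask logic (standalone helpers, not the ACPICA API) might look like:

```c
#include <stdint.h>
#include <stddef.h>

/* Build a 64-bit function mask from the buffer that _DSM function 0
 * returns, least-significant byte first, as acpi_check_dsm() does. */
static uint64_t dsm_mask_from_buffer(const uint8_t *buf, size_t len)
{
	uint64_t mask = 0;
	size_t i;

	for (i = 0; i < len && i < 8; i++)
		mask |= ((uint64_t)buf[i]) << (i * 8);
	return mask;
}

/* Return 1 only if every function bit requested in 'funcs' is advertised.
 * funcs == 0 is rejected up front, matching the patched behaviour:
 * "are any commands reachable?" must check bit 0, not pass zero. */
static int dsm_supports(uint64_t mask, uint64_t funcs)
{
	if (funcs == 0)
		return 0;
	return (mask & funcs) == funcs;
}
```

This is also why the NFIT hunk earlier in the batch switches its probe from `acpi_check_dsm(..., 1, 0)` to `acpi_check_dsm(..., 1, 1)`: with the stricter semantics, bit 0 must be requested explicitly.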
index 6be7770..31c183a 100644 (file)
@@ -4314,6 +4314,12 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
         */
        { "ST380013AS",         "3.20",         ATA_HORKAGE_MAX_SEC_1024 },
 
+       /*
+        * Device times out with higher max sects.
+        * https://bugzilla.kernel.org/show_bug.cgi?id=121671
+        */
+       { "LITEON CX1-JB256-HP", NULL,          ATA_HORKAGE_MAX_SEC_1024 },
+
        /* Devices we expect to fail diagnostics */
 
        /* Devices where NCQ should be avoided */
index ddc4ceb..700ed15 100644 (file)
@@ -874,7 +874,8 @@ static scq_info *get_scq(ns_dev *card, int size, u32 scd)
        scq->skb = kmalloc(sizeof(struct sk_buff *) *
                           (size / NS_SCQE_SIZE), GFP_KERNEL);
        if (!scq->skb) {
-               kfree(scq->org);
+               dma_free_coherent(&card->pcidev->dev,
+                                 2 * size, scq->org, scq->dma);
                kfree(scq);
                return NULL;
        }
index eda0909..f642c42 100644 (file)
@@ -8,8 +8,6 @@
 #include <linux/bcma/bcma.h>
 #include <linux/delay.h>
 
-#define BCMA_CORE_SIZE         0x1000
-
 #define bcma_err(bus, fmt, ...) \
        pr_err("bus%d: " fmt, (bus)->num, ##__VA_ARGS__)
 #define bcma_warn(bus, fmt, ...) \
index 2e6d1e9..fcc5b4e 100644 (file)
@@ -207,6 +207,9 @@ struct blkfront_info
        struct blk_mq_tag_set tag_set;
        struct blkfront_ring_info *rinfo;
        unsigned int nr_rings;
+       /* Save incomplete reqs and bios for migration. */
+       struct list_head requests;
+       struct bio_list bio_list;
 };
 
 static unsigned int nr_minors;
@@ -2002,69 +2005,22 @@ static int blkif_recover(struct blkfront_info *info)
 {
        unsigned int i, r_index;
        struct request *req, *n;
-       struct blk_shadow *copy;
        int rc;
        struct bio *bio, *cloned_bio;
-       struct bio_list bio_list, merge_bio;
        unsigned int segs, offset;
        int pending, size;
        struct split_bio *split_bio;
-       struct list_head requests;
 
        blkfront_gather_backend_features(info);
        segs = info->max_indirect_segments ? : BLKIF_MAX_SEGMENTS_PER_REQUEST;
        blk_queue_max_segments(info->rq, segs);
-       bio_list_init(&bio_list);
-       INIT_LIST_HEAD(&requests);
 
        for (r_index = 0; r_index < info->nr_rings; r_index++) {
-               struct blkfront_ring_info *rinfo;
-
-               rinfo = &info->rinfo[r_index];
-               /* Stage 1: Make a safe copy of the shadow state. */
-               copy = kmemdup(rinfo->shadow, sizeof(rinfo->shadow),
-                              GFP_NOIO | __GFP_REPEAT | __GFP_HIGH);
-               if (!copy)
-                       return -ENOMEM;
-
-               /* Stage 2: Set up free list. */
-               memset(&rinfo->shadow, 0, sizeof(rinfo->shadow));
-               for (i = 0; i < BLK_RING_SIZE(info); i++)
-                       rinfo->shadow[i].req.u.rw.id = i+1;
-               rinfo->shadow_free = rinfo->ring.req_prod_pvt;
-               rinfo->shadow[BLK_RING_SIZE(info)-1].req.u.rw.id = 0x0fffffff;
+               struct blkfront_ring_info *rinfo = &info->rinfo[r_index];
 
                rc = blkfront_setup_indirect(rinfo);
-               if (rc) {
-                       kfree(copy);
+               if (rc)
                        return rc;
-               }
-
-               for (i = 0; i < BLK_RING_SIZE(info); i++) {
-                       /* Not in use? */
-                       if (!copy[i].request)
-                               continue;
-
-                       /*
-                        * Get the bios in the request so we can re-queue them.
-                        */
-                       if (copy[i].request->cmd_flags &
-                           (REQ_FLUSH | REQ_FUA | REQ_DISCARD | REQ_SECURE)) {
-                               /*
-                                * Flush operations don't contain bios, so
-                                * we need to requeue the whole request
-                                */
-                               list_add(&copy[i].request->queuelist, &requests);
-                               continue;
-                       }
-                       merge_bio.head = copy[i].request->bio;
-                       merge_bio.tail = copy[i].request->biotail;
-                       bio_list_merge(&bio_list, &merge_bio);
-                       copy[i].request->bio = NULL;
-                       blk_end_request_all(copy[i].request, 0);
-               }
-
-               kfree(copy);
        }
        xenbus_switch_state(info->xbdev, XenbusStateConnected);
 
@@ -2079,7 +2035,7 @@ static int blkif_recover(struct blkfront_info *info)
                kick_pending_request_queues(rinfo);
        }
 
-       list_for_each_entry_safe(req, n, &requests, queuelist) {
+       list_for_each_entry_safe(req, n, &info->requests, queuelist) {
                /* Requeue pending requests (flush or discard) */
                list_del_init(&req->queuelist);
                BUG_ON(req->nr_phys_segments > segs);
@@ -2087,7 +2043,7 @@ static int blkif_recover(struct blkfront_info *info)
        }
        blk_mq_kick_requeue_list(info->rq);
 
-       while ((bio = bio_list_pop(&bio_list)) != NULL) {
+       while ((bio = bio_list_pop(&info->bio_list)) != NULL) {
                /* Traverse the list of pending bios and re-queue them */
                if (bio_segments(bio) > segs) {
                        /*
@@ -2133,9 +2089,42 @@ static int blkfront_resume(struct xenbus_device *dev)
 {
        struct blkfront_info *info = dev_get_drvdata(&dev->dev);
        int err = 0;
+       unsigned int i, j;
 
        dev_dbg(&dev->dev, "blkfront_resume: %s\n", dev->nodename);
 
+       bio_list_init(&info->bio_list);
+       INIT_LIST_HEAD(&info->requests);
+       for (i = 0; i < info->nr_rings; i++) {
+               struct blkfront_ring_info *rinfo = &info->rinfo[i];
+               struct bio_list merge_bio;
+               struct blk_shadow *shadow = rinfo->shadow;
+
+               for (j = 0; j < BLK_RING_SIZE(info); j++) {
+                       /* Not in use? */
+                       if (!shadow[j].request)
+                               continue;
+
+                       /*
+                        * Get the bios in the request so we can re-queue them.
+                        */
+                       if (shadow[j].request->cmd_flags &
+                                       (REQ_FLUSH | REQ_FUA | REQ_DISCARD | REQ_SECURE)) {
+                               /*
+                                * Flush operations don't contain bios, so
+                                * we need to requeue the whole request
+                                */
+                               list_add(&shadow[j].request->queuelist, &info->requests);
+                               continue;
+                       }
+                       merge_bio.head = shadow[j].request->bio;
+                       merge_bio.tail = shadow[j].request->biotail;
+                       bio_list_merge(&info->bio_list, &merge_bio);
+                       shadow[j].request->bio = NULL;
+                       blk_mq_end_request(shadow[j].request, 0);
+               }
+       }
+
        blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
 
        err = negotiate_mq(info);
index 2589468..fadba88 100644 (file)
@@ -123,6 +123,7 @@ static const struct usb_device_id ath3k_table[] = {
        { USB_DEVICE(0x13d3, 0x3472) },
        { USB_DEVICE(0x13d3, 0x3474) },
        { USB_DEVICE(0x13d3, 0x3487) },
+       { USB_DEVICE(0x13d3, 0x3490) },
 
        /* Atheros AR5BBU12 with sflash firmware */
        { USB_DEVICE(0x0489, 0xE02C) },
@@ -190,6 +191,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
        { USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
        { USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
        { USB_DEVICE(0x13d3, 0x3487), .driver_info = BTUSB_ATH3012 },
+       { USB_DEVICE(0x13d3, 0x3490), .driver_info = BTUSB_ATH3012 },
 
        /* Atheros AR5BBU22 with sflash firmware */
        { USB_DEVICE(0x0489, 0xE036), .driver_info = BTUSB_ATH3012 },
index fd6b53e..a9932fe 100644 (file)
@@ -274,6 +274,8 @@ static int bpa10x_setup(struct hci_dev *hdev)
 
        BT_INFO("%s: %s", hdev->name, (char *)(skb->data + 1));
 
+       hci_set_fw_info(hdev, "%s", skb->data + 1);
+
        kfree_skb(skb);
        return 0;
 }
index 7ad8d61..e6a85f0 100644 (file)
@@ -138,7 +138,7 @@ int btmrvl_process_event(struct btmrvl_private *priv, struct sk_buff *skb)
                        if (event->length > 3 && event->data[3])
                                priv->btmrvl_dev.dev_type = HCI_AMP;
                        else
-                               priv->btmrvl_dev.dev_type = HCI_BREDR;
+                               priv->btmrvl_dev.dev_type = HCI_PRIMARY;
 
                        BT_DBG("dev_type: %d", priv->btmrvl_dev.dev_type);
                } else if (priv->btmrvl_dev.sendcmdflag &&
index f425ddf..d02f2c1 100644 (file)
@@ -1071,7 +1071,6 @@ static int btmrvl_sdio_host_to_card(struct btmrvl_private *priv,
 {
        struct btmrvl_sdio_card *card = priv->btmrvl_dev.card;
        int ret = 0;
-       int buf_block_len;
        int blksz;
        int i = 0;
        u8 *buf = NULL;
@@ -1083,9 +1082,13 @@ static int btmrvl_sdio_host_to_card(struct btmrvl_private *priv,
                return -EINVAL;
        }
 
+       blksz = DIV_ROUND_UP(nb, SDIO_BLOCK_SIZE) * SDIO_BLOCK_SIZE;
+
        buf = payload;
-       if ((unsigned long) payload & (BTSDIO_DMA_ALIGN - 1)) {
-               tmpbufsz = ALIGN_SZ(nb, BTSDIO_DMA_ALIGN);
+       if ((unsigned long) payload & (BTSDIO_DMA_ALIGN - 1) ||
+           nb < blksz) {
+               tmpbufsz = ALIGN_SZ(blksz, BTSDIO_DMA_ALIGN) +
+                          BTSDIO_DMA_ALIGN;
                tmpbuf = kzalloc(tmpbufsz, GFP_KERNEL);
                if (!tmpbuf)
                        return -ENOMEM;
@@ -1093,15 +1096,12 @@ static int btmrvl_sdio_host_to_card(struct btmrvl_private *priv,
                memcpy(buf, payload, nb);
        }
 
-       blksz = SDIO_BLOCK_SIZE;
-       buf_block_len = DIV_ROUND_UP(nb, blksz);
-
        sdio_claim_host(card->func);
 
        do {
                /* Transfer data to card */
                ret = sdio_writesb(card->func, card->ioport, buf,
-                                  buf_block_len * blksz);
+                                  blksz);
                if (ret < 0) {
                        i++;
                        BT_ERR("i=%d writesb failed: %d", i, ret);
@@ -1625,6 +1625,7 @@ static int btmrvl_sdio_suspend(struct device *dev)
        if (priv->adapter->hs_state != HS_ACTIVATED) {
                if (btmrvl_enable_hs(priv)) {
                        BT_ERR("HS not actived, suspend failed!");
+                       priv->adapter->is_suspending = false;
                        return -EBUSY;
                }
        }
index 2b05661..1cb958e 100644 (file)
@@ -311,7 +311,7 @@ static int btsdio_probe(struct sdio_func *func,
        if (id->class == SDIO_CLASS_BT_AMP)
                hdev->dev_type = HCI_AMP;
        else
-               hdev->dev_type = HCI_BREDR;
+               hdev->dev_type = HCI_PRIMARY;
 
        data->hdev = hdev;
 
index a3be65e..811f9b9 100644 (file)
@@ -237,6 +237,7 @@ static const struct usb_device_id blacklist_table[] = {
        { USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
        { USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
        { USB_DEVICE(0x13d3, 0x3487), .driver_info = BTUSB_ATH3012 },
+       { USB_DEVICE(0x13d3, 0x3490), .driver_info = BTUSB_ATH3012 },
 
        /* Atheros AR5BBU12 with sflash firmware */
        { USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
@@ -249,6 +250,7 @@ static const struct usb_device_id blacklist_table[] = {
        { USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME },
        { USB_DEVICE(0x0cf3, 0xe300), .driver_info = BTUSB_QCA_ROME },
        { USB_DEVICE(0x0cf3, 0xe360), .driver_info = BTUSB_QCA_ROME },
+       { USB_DEVICE(0x0489, 0xe092), .driver_info = BTUSB_QCA_ROME },
 
        /* Broadcom BCM2035 */
        { USB_DEVICE(0x0a5c, 0x2009), .driver_info = BTUSB_BCM92035 },
@@ -314,6 +316,7 @@ static const struct usb_device_id blacklist_table[] = {
        { USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL },
        { USB_DEVICE(0x8087, 0x0a2a), .driver_info = BTUSB_INTEL },
        { USB_DEVICE(0x8087, 0x0a2b), .driver_info = BTUSB_INTEL_NEW },
+       { USB_DEVICE(0x8087, 0x0aa7), .driver_info = BTUSB_INTEL },
 
        /* Other Intel Bluetooth devices */
        { USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
@@ -2103,10 +2106,14 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
        /* With this Intel bootloader only the hardware variant and device
         * revision information are used to select the right firmware.
         *
-        * Currently this bootloader support is limited to hardware variant
-        * iBT 3.0 (LnP/SfP) which is identified by the value 11 (0x0b).
+        * The firmware filename is ibt-<hw_variant>-<dev_revid>.sfi.
+        *
+        * Currently the supported hardware variants are:
+        *   11 (0x0b) for iBT3.0 (LnP/SfP)
+        *   12 (0x0c) for iBT3.5 (WsP)
         */
-       snprintf(fwname, sizeof(fwname), "intel/ibt-11-%u.sfi",
+       snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u.sfi",
+                le16_to_cpu(ver.hw_variant),
                 le16_to_cpu(params->dev_revid));
 
        err = request_firmware(&fw, fwname, &hdev->dev);
@@ -2122,7 +2129,8 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
        /* Save the DDC file name for later use to apply once the firmware
         * downloading is done.
         */
-       snprintf(fwname, sizeof(fwname), "intel/ibt-11-%u.ddc",
+       snprintf(fwname, sizeof(fwname), "intel/ibt-%u-%u.ddc",
+                le16_to_cpu(ver.hw_variant),
                 le16_to_cpu(params->dev_revid));
 
        kfree_skb(skb);
@@ -2825,7 +2833,7 @@ static int btusb_probe(struct usb_interface *intf,
        if (id->driver_info & BTUSB_AMP)
                hdev->dev_type = HCI_AMP;
        else
-               hdev->dev_type = HCI_BREDR;
+               hdev->dev_type = HCI_PRIMARY;
 
        data->hdev = hdev;
 
index 24a652f..485281b 100644 (file)
@@ -51,7 +51,7 @@
  */
 struct ti_st {
        struct hci_dev *hdev;
-       char reg_status;
+       int reg_status;
        long (*st_write) (struct sk_buff *);
        struct completion wait_reg_completion;
 };
@@ -83,7 +83,7 @@ static inline void ti_st_tx_complete(struct ti_st *hst, int pkt_type)
  * status.ti_st_open() function will wait for signal from this
  * API when st_register() function returns ST_PENDING.
  */
-static void st_reg_completion_cb(void *priv_data, char data)
+static void st_reg_completion_cb(void *priv_data, int data)
 {
        struct ti_st *lhst = priv_data;
 
index f6f2b01..ed0a420 100644 (file)
@@ -537,9 +537,7 @@ static int intel_setup(struct hci_uart *hu)
 {
        static const u8 reset_param[] = { 0x00, 0x01, 0x00, 0x01,
                                          0x00, 0x08, 0x04, 0x00 };
-       static const u8 lpm_param[] = { 0x03, 0x07, 0x01, 0x0b };
        struct intel_data *intel = hu->priv;
-       struct intel_device *idev = NULL;
        struct hci_dev *hdev = hu->hdev;
        struct sk_buff *skb;
        struct intel_version ver;
@@ -884,35 +882,23 @@ done:
 
        bt_dev_info(hdev, "Device booted in %llu usecs", duration);
 
-       /* Enable LPM if matching pdev with wakeup enabled */
+       /* Enable LPM if matching pdev with wakeup enabled, set TX active
+        * until further LPM TX notification.
+        */
        mutex_lock(&intel_device_list_lock);
        list_for_each(p, &intel_device_list) {
                struct intel_device *dev = list_entry(p, struct intel_device,
                                                      list);
                if (hu->tty->dev->parent == dev->pdev->dev.parent) {
-                       if (device_may_wakeup(&dev->pdev->dev))
-                               idev = dev;
+                       if (device_may_wakeup(&dev->pdev->dev)) {
+                               set_bit(STATE_LPM_ENABLED, &intel->flags);
+                               set_bit(STATE_TX_ACTIVE, &intel->flags);
+                       }
                        break;
                }
        }
        mutex_unlock(&intel_device_list_lock);
 
-       if (!idev)
-               goto no_lpm;
-
-       bt_dev_info(hdev, "Enabling LPM");
-
-       skb = __hci_cmd_sync(hdev, 0xfc8b, sizeof(lpm_param), lpm_param,
-                            HCI_CMD_TIMEOUT);
-       if (IS_ERR(skb)) {
-               bt_dev_err(hdev, "Failed to enable LPM");
-               goto no_lpm;
-       }
-       kfree_skb(skb);
-
-       set_bit(STATE_LPM_ENABLED, &intel->flags);
-
-no_lpm:
        /* Ignore errors, device can work without DDC parameters */
        btintel_load_ddc_config(hdev, fwname);
 
index 49b3e1e..dda9739 100644 (file)
@@ -609,7 +609,7 @@ static int hci_uart_register_dev(struct hci_uart *hu)
        if (test_bit(HCI_UART_CREATE_AMP, &hu->hdev_flags))
                hdev->dev_type = HCI_AMP;
        else
-               hdev->dev_type = HCI_BREDR;
+               hdev->dev_type = HCI_PRIMARY;
 
        if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
                return 0;
index aba3121..3ff229b 100644 (file)
@@ -97,10 +97,10 @@ static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
        if (data->hdev)
                return -EBADFD;
 
-       /* bits 0-1 are dev_type (BR/EDR or AMP) */
+       /* bits 0-1 are dev_type (Primary or AMP) */
        dev_type = opcode & 0x03;
 
-       if (dev_type != HCI_BREDR && dev_type != HCI_AMP)
+       if (dev_type != HCI_PRIMARY && dev_type != HCI_AMP)
                return -EINVAL;
 
        /* bits 2-5 are reserved (must be zero) */
@@ -316,7 +316,7 @@ static void vhci_open_timeout(struct work_struct *work)
        struct vhci_data *data = container_of(work, struct vhci_data,
                                              open_timeout.work);
 
-       vhci_create_device(data, amp ? HCI_AMP : HCI_BREDR);
+       vhci_create_device(data, amp ? HCI_AMP : HCI_PRIMARY);
 }
 
 static int vhci_open(struct inode *inode, struct file *file)
index 10f846c..25d5906 100644 (file)
@@ -99,7 +99,7 @@ static int clk_programmable_set_parent(struct clk_hw *hw, u8 index)
        struct clk_programmable *prog = to_clk_programmable(hw);
        const struct clk_programmable_layout *layout = prog->layout;
        unsigned int mask = layout->css_mask;
-       unsigned int pckr = 0;
+       unsigned int pckr = index;
 
        if (layout->have_slck_mck)
                mask |= AT91_PMC_CSSMCK_MCK;
index 445a749..9780fac 100644 (file)
@@ -33,6 +33,8 @@ struct sun4i_a10_display_clk_data {
 
        u8      width_div;
        u8      width_mux;
+
+       u32     flags;
 };
 
 struct reset_data {
@@ -166,7 +168,7 @@ static void __init sun4i_a10_display_init(struct device_node *node,
                                     data->has_div ? &div->hw : NULL,
                                     data->has_div ? &clk_divider_ops : NULL,
                                     &gate->hw, &clk_gate_ops,
-                                    0);
+                                    data->flags);
        if (IS_ERR(clk)) {
                pr_err("%s: Couldn't register the clock\n", clk_name);
                goto free_div;
@@ -232,6 +234,7 @@ static const struct sun4i_a10_display_clk_data sun4i_a10_tcon_ch0_data __initcon
        .offset_rst     = 29,
        .offset_mux     = 24,
        .width_mux      = 2,
+       .flags          = CLK_SET_RATE_PARENT,
 };
 
 static void __init sun4i_a10_tcon_ch0_setup(struct device_node *node)
index 98a4582..b6d29d1 100644 (file)
@@ -79,15 +79,11 @@ static int tcon_ch1_is_enabled(struct clk_hw *hw)
 static u8 tcon_ch1_get_parent(struct clk_hw *hw)
 {
        struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
-       int num_parents = clk_hw_get_num_parents(hw);
        u32 reg;
 
        reg = readl(tclk->reg) >> TCON_CH1_SCLK2_MUX_SHIFT;
        reg &= reg >> TCON_CH1_SCLK2_MUX_MASK;
 
-       if (reg >= num_parents)
-               return -EINVAL;
-
        return reg;
 }
 
index a4d0059..c73207a 100644 (file)
@@ -173,7 +173,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
 
        struct cpuidle_state *target_state = &drv->states[index];
        bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP);
-       u64 time_start, time_end;
+       ktime_t time_start, time_end;
        s64 diff;
 
        /*
@@ -195,13 +195,13 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
        sched_idle_set_state(target_state);
 
        trace_cpu_idle_rcuidle(index, dev->cpu);
-       time_start = local_clock();
+       time_start = ns_to_ktime(local_clock());
 
        stop_critical_timings();
        entered_state = target_state->enter(dev, drv, index);
        start_critical_timings();
 
-       time_end = local_clock();
+       time_end = ns_to_ktime(local_clock());
        trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
 
        /* The cpu is no longer idle or about to enter idle. */
@@ -217,11 +217,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
        if (!cpuidle_state_is_coupled(drv, index))
                local_irq_enable();
 
-       /*
-        * local_clock() returns the time in nanosecond, let's shift
-        * by 10 (divide by 1024) to have microsecond based time.
-        */
-       diff = (time_end - time_start) >> 10;
+       diff = ktime_us_delta(time_end, time_start);
        if (diff > INT_MAX)
                diff = INT_MAX;
 
index 6d74b91..5fc3dbb 100644 (file)
@@ -2,6 +2,7 @@ $(obj)/qat_rsapubkey-asn1.o: $(obj)/qat_rsapubkey-asn1.c \
                             $(obj)/qat_rsapubkey-asn1.h
 $(obj)/qat_rsaprivkey-asn1.o: $(obj)/qat_rsaprivkey-asn1.c \
                              $(obj)/qat_rsaprivkey-asn1.h
+$(obj)/qat_asym_algs.o: $(obj)/qat_rsapubkey-asn1.h $(obj)/qat_rsaprivkey-asn1.h
 
 clean-files += qat_rsapubkey-asn1.c qat_rsapubkey-asn1.h
 clean-files += qat_rsaprivkey-asn1.c qat_rsaprivkey-asn1.h
index 6744d88..4fb2eb7 100644 (file)
@@ -2378,22 +2378,19 @@ static int sbridge_get_onedevice(struct pci_dev **prev,
  * @num_mc: pointer to the memory controllers count, to be incremented in case
  *         of success.
  * @table: model specific table
- * @allow_dups: allow for multiple devices to exist with the same device id
- *              (as implemented, this isn't expected to work correctly in the
- *              multi-socket case).
- * @multi_bus: don't assume devices on different buses belong to different
- *             memory controllers.
  *
  * returns 0 in case of success or error code
  */
-static int sbridge_get_all_devices_full(u8 *num_mc,
-                                       const struct pci_id_table *table,
-                                       int allow_dups,
-                                       int multi_bus)
+static int sbridge_get_all_devices(u8 *num_mc,
+                                       const struct pci_id_table *table)
 {
        int i, rc;
        struct pci_dev *pdev = NULL;
+       int allow_dups = 0;
+       int multi_bus = 0;
 
+       if (table->type == KNIGHTS_LANDING)
+               allow_dups = multi_bus = 1;
        while (table && table->descr) {
                for (i = 0; i < table->n_devs; i++) {
                        if (!allow_dups || i == 0 ||
@@ -2420,11 +2417,6 @@ static int sbridge_get_all_devices_full(u8 *num_mc,
        return 0;
 }
 
-#define sbridge_get_all_devices(num_mc, table) \
-               sbridge_get_all_devices_full(num_mc, table, 0, 0)
-#define sbridge_get_all_devices_knl(num_mc, table) \
-               sbridge_get_all_devices_full(num_mc, table, 1, 1)
-
 static int sbridge_mci_bind_devs(struct mem_ctl_info *mci,
                                 struct sbridge_dev *sbridge_dev)
 {
index cebcb40..d786061 100644 (file)
@@ -49,7 +49,7 @@ config GPIO_DEVRES
 
 config OF_GPIO
        def_bool y
-       depends on OF || COMPILE_TEST
+       depends on OF
 
 config GPIO_ACPI
        def_bool y
@@ -402,9 +402,12 @@ config GPIO_TB10X
        select OF_GPIO
 
 config GPIO_TEGRA
-       bool
-       default y
+       bool "NVIDIA Tegra GPIO support"
+       default ARCH_TEGRA
        depends on ARCH_TEGRA || COMPILE_TEST
+       depends on OF
+       help
+         Say yes here to support GPIO pins on NVIDIA Tegra SoCs.
 
 config GPIO_TS4800
        tristate "TS-4800 DIO blocks and compatibles"
index e85e753..eb43ae4 100644 (file)
@@ -61,9 +61,8 @@ static unsigned sch_gpio_bit(struct sch_gpio *sch, unsigned gpio)
        return gpio % 8;
 }
 
-static int sch_gpio_reg_get(struct gpio_chip *gc, unsigned gpio, unsigned reg)
+static int sch_gpio_reg_get(struct sch_gpio *sch, unsigned gpio, unsigned reg)
 {
-       struct sch_gpio *sch = gpiochip_get_data(gc);
        unsigned short offset, bit;
        u8 reg_val;
 
@@ -75,10 +74,9 @@ static int sch_gpio_reg_get(struct gpio_chip *gc, unsigned gpio, unsigned reg)
        return reg_val;
 }
 
-static void sch_gpio_reg_set(struct gpio_chip *gc, unsigned gpio, unsigned reg,
+static void sch_gpio_reg_set(struct sch_gpio *sch, unsigned gpio, unsigned reg,
                             int val)
 {
-       struct sch_gpio *sch = gpiochip_get_data(gc);
        unsigned short offset, bit;
        u8 reg_val;
 
@@ -98,14 +96,15 @@ static int sch_gpio_direction_in(struct gpio_chip *gc, unsigned gpio_num)
        struct sch_gpio *sch = gpiochip_get_data(gc);
 
        spin_lock(&sch->lock);
-       sch_gpio_reg_set(gc, gpio_num, GIO, 1);
+       sch_gpio_reg_set(sch, gpio_num, GIO, 1);
        spin_unlock(&sch->lock);
        return 0;
 }
 
 static int sch_gpio_get(struct gpio_chip *gc, unsigned gpio_num)
 {
-       return sch_gpio_reg_get(gc, gpio_num, GLV);
+       struct sch_gpio *sch = gpiochip_get_data(gc);
+       return sch_gpio_reg_get(sch, gpio_num, GLV);
 }
 
 static void sch_gpio_set(struct gpio_chip *gc, unsigned gpio_num, int val)
@@ -113,7 +112,7 @@ static void sch_gpio_set(struct gpio_chip *gc, unsigned gpio_num, int val)
        struct sch_gpio *sch = gpiochip_get_data(gc);
 
        spin_lock(&sch->lock);
-       sch_gpio_reg_set(gc, gpio_num, GLV, val);
+       sch_gpio_reg_set(sch, gpio_num, GLV, val);
        spin_unlock(&sch->lock);
 }
 
@@ -123,7 +122,7 @@ static int sch_gpio_direction_out(struct gpio_chip *gc, unsigned gpio_num,
        struct sch_gpio *sch = gpiochip_get_data(gc);
 
        spin_lock(&sch->lock);
-       sch_gpio_reg_set(gc, gpio_num, GIO, 0);
+       sch_gpio_reg_set(sch, gpio_num, GIO, 0);
        spin_unlock(&sch->lock);
 
        /*
@@ -182,13 +181,13 @@ static int sch_gpio_probe(struct platform_device *pdev)
                 * GPIO7 is configured by the CMC as SLPIOVR
                 * Enable GPIO[9:8] core powered gpios explicitly
                 */
-               sch_gpio_reg_set(&sch->chip, 8, GEN, 1);
-               sch_gpio_reg_set(&sch->chip, 9, GEN, 1);
+               sch_gpio_reg_set(sch, 8, GEN, 1);
+               sch_gpio_reg_set(sch, 9, GEN, 1);
                /*
                 * SUS_GPIO[2:0] enabled by default
                 * Enable SUS_GPIO3 resume powered gpio explicitly
                 */
-               sch_gpio_reg_set(&sch->chip, 13, GEN, 1);
+               sch_gpio_reg_set(sch, 13, GEN, 1);
                break;
 
        case PCI_DEVICE_ID_INTEL_ITC_LPC:
index 3a5c701..8b83099 100644 (file)
@@ -28,6 +28,10 @@ int gpio_request_one(unsigned gpio, unsigned long flags, const char *label)
        if (!desc && gpio_is_valid(gpio))
                return -EPROBE_DEFER;
 
+       err = gpiod_request(desc, label);
+       if (err)
+               return err;
+
        if (flags & GPIOF_OPEN_DRAIN)
                set_bit(FLAG_OPEN_DRAIN, &desc->flags);
 
@@ -37,10 +41,6 @@ int gpio_request_one(unsigned gpio, unsigned long flags, const char *label)
        if (flags & GPIOF_ACTIVE_LOW)
                set_bit(FLAG_ACTIVE_LOW, &desc->flags);
 
-       err = gpiod_request(desc, label);
-       if (err)
-               return err;
-
        if (flags & GPIOF_DIR_IN)
                err = gpiod_direction_input(desc);
        else
index 570771e..be74bd3 100644 (file)
@@ -1352,14 +1352,6 @@ static int __gpiod_request(struct gpio_desc *desc, const char *label)
                spin_lock_irqsave(&gpio_lock, flags);
        }
 done:
-       if (status < 0) {
-               /* Clear flags that might have been set by the caller before
-                * requesting the GPIO.
-                */
-               clear_bit(FLAG_ACTIVE_LOW, &desc->flags);
-               clear_bit(FLAG_OPEN_DRAIN, &desc->flags);
-               clear_bit(FLAG_OPEN_SOURCE, &desc->flags);
-       }
        spin_unlock_irqrestore(&gpio_lock, flags);
        return status;
 }
@@ -2587,28 +2579,13 @@ struct gpio_desc *__must_check gpiod_get_optional(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(gpiod_get_optional);
 
-/**
- * gpiod_parse_flags - helper function to parse GPIO lookup flags
- * @desc:      gpio to be setup
- * @lflags:    gpio_lookup_flags - returned from of_find_gpio() or
- *             of_get_gpio_hog()
- *
- * Set the GPIO descriptor flags based on the given GPIO lookup flags.
- */
-static void gpiod_parse_flags(struct gpio_desc *desc, unsigned long lflags)
-{
-       if (lflags & GPIO_ACTIVE_LOW)
-               set_bit(FLAG_ACTIVE_LOW, &desc->flags);
-       if (lflags & GPIO_OPEN_DRAIN)
-               set_bit(FLAG_OPEN_DRAIN, &desc->flags);
-       if (lflags & GPIO_OPEN_SOURCE)
-               set_bit(FLAG_OPEN_SOURCE, &desc->flags);
-}
 
 /**
  * gpiod_configure_flags - helper function to configure a given GPIO
  * @desc:      gpio whose value will be assigned
  * @con_id:    function within the GPIO consumer
+ * @lflags:    gpio_lookup_flags - returned from of_find_gpio() or
+ *             of_get_gpio_hog()
  * @dflags:    gpiod_flags - optional GPIO initialization flags
  *
  * Return 0 on success, -ENOENT if no GPIO has been assigned to the
@@ -2616,10 +2593,17 @@ static void gpiod_parse_flags(struct gpio_desc *desc, unsigned long lflags)
  * occurred while trying to acquire the GPIO.
  */
 static int gpiod_configure_flags(struct gpio_desc *desc, const char *con_id,
-                                enum gpiod_flags dflags)
+               unsigned long lflags, enum gpiod_flags dflags)
 {
        int status;
 
+       if (lflags & GPIO_ACTIVE_LOW)
+               set_bit(FLAG_ACTIVE_LOW, &desc->flags);
+       if (lflags & GPIO_OPEN_DRAIN)
+               set_bit(FLAG_OPEN_DRAIN, &desc->flags);
+       if (lflags & GPIO_OPEN_SOURCE)
+               set_bit(FLAG_OPEN_SOURCE, &desc->flags);
+
        /* No particular flag request, return here... */
        if (!(dflags & GPIOD_FLAGS_BIT_DIR_SET)) {
                pr_debug("no flags found for %s\n", con_id);
@@ -2686,13 +2670,11 @@ struct gpio_desc *__must_check gpiod_get_index(struct device *dev,
                return desc;
        }
 
-       gpiod_parse_flags(desc, lookupflags);
-
        status = gpiod_request(desc, con_id);
        if (status < 0)
                return ERR_PTR(status);
 
-       status = gpiod_configure_flags(desc, con_id, flags);
+       status = gpiod_configure_flags(desc, con_id, lookupflags, flags);
        if (status < 0) {
                dev_dbg(dev, "setup of GPIO %s failed\n", con_id);
                gpiod_put(desc);
@@ -2748,6 +2730,10 @@ struct gpio_desc *fwnode_get_named_gpiod(struct fwnode_handle *fwnode,
        if (IS_ERR(desc))
                return desc;
 
+       ret = gpiod_request(desc, NULL);
+       if (ret)
+               return ERR_PTR(ret);
+
        if (active_low)
                set_bit(FLAG_ACTIVE_LOW, &desc->flags);
 
@@ -2758,10 +2744,6 @@ struct gpio_desc *fwnode_get_named_gpiod(struct fwnode_handle *fwnode,
                        set_bit(FLAG_OPEN_SOURCE, &desc->flags);
        }
 
-       ret = gpiod_request(desc, NULL);
-       if (ret)
-               return ERR_PTR(ret);
-
        return desc;
 }
 EXPORT_SYMBOL_GPL(fwnode_get_named_gpiod);
@@ -2814,8 +2796,6 @@ int gpiod_hog(struct gpio_desc *desc, const char *name,
        chip = gpiod_to_chip(desc);
        hwnum = gpio_chip_hwgpio(desc);
 
-       gpiod_parse_flags(desc, lflags);
-
        local_desc = gpiochip_request_own_desc(chip, hwnum, name);
        if (IS_ERR(local_desc)) {
                status = PTR_ERR(local_desc);
@@ -2824,7 +2804,7 @@ int gpiod_hog(struct gpio_desc *desc, const char *name,
                return status;
        }
 
-       status = gpiod_configure_flags(desc, name, dflags);
+       status = gpiod_configure_flags(desc, name, lflags, dflags);
        if (status < 0) {
                pr_err("setup of hog GPIO %s (chip %s, offset %d) failed, %d\n",
                       name, chip->label, hwnum, status);
index 13cdb01..bc56c8a 100644
@@ -156,3 +156,18 @@ u32 amdgpu_atombios_i2c_func(struct i2c_adapter *adap)
        return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
 }
 
+void amdgpu_atombios_i2c_channel_trans(struct amdgpu_device* adev, u8 slave_addr, u8 line_number, u8 offset, u8 data)
+{
+       PROCESS_I2C_CHANNEL_TRANSACTION_PS_ALLOCATION args;
+       int index = GetIndexIntoMasterTable(COMMAND, ProcessI2cChannelTransaction);
+
+       args.ucRegIndex = offset;
+       args.lpI2CDataOut = data;
+       args.ucFlag = 1;
+       args.ucI2CSpeed = TARGET_HW_I2C_CLOCK;
+       args.ucTransBytes = 1;
+       args.ucSlaveAddr = slave_addr;
+       args.ucLineNumber = line_number;
+
+       amdgpu_atom_execute_table(adev->mode_info.atom_context, index, (uint32_t *)&args);
+}
index d6128d9..251aaf4 100644
@@ -27,5 +27,7 @@
 int amdgpu_atombios_i2c_xfer(struct i2c_adapter *i2c_adap,
                      struct i2c_msg *msgs, int num);
 u32 amdgpu_atombios_i2c_func(struct i2c_adapter *adap);
+void amdgpu_atombios_i2c_channel_trans(struct amdgpu_device* adev,
+               u8 slave_addr, u8 line_number, u8 offset, u8 data);
 
 #endif
index b2ebd4f..c2ef945 100644
@@ -28,6 +28,7 @@
 #include "vid.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_atombios.h"
+#include "atombios_i2c.h"
 #include "clearstate_vi.h"
 
 #include "gmc/gmc_8_2_d.h"
@@ -284,6 +285,7 @@ static const u32 golden_settings_polaris11_a11[] =
        mmTCP_ADDR_CONFIG, 0x000003ff, 0x000000f3,
        mmTCP_CHAN_STEER_HI, 0xffffffff, 0x00000000,
        mmTCP_CHAN_STEER_LO, 0xffffffff, 0x00003210,
+       mmVGT_RESET_DEBUG, 0x00000004, 0x00000004,
 };
 
 static const u32 polaris11_golden_common_all[] =
@@ -314,6 +316,7 @@ static const u32 golden_settings_polaris10_a11[] =
        mmTCC_CTRL, 0x00100000, 0xf31fff7f,
        mmTCP_ADDR_CONFIG, 0x000003ff, 0x000000f7,
        mmTCP_CHAN_STEER_HI, 0xffffffff, 0x00000000,
+       mmVGT_RESET_DEBUG, 0x00000004, 0x00000004,
 };
 
 static const u32 polaris10_golden_common_all[] =
@@ -696,6 +699,10 @@ static void gfx_v8_0_init_golden_registers(struct amdgpu_device *adev)
                                                 polaris10_golden_common_all,
                                                 (const u32)ARRAY_SIZE(polaris10_golden_common_all));
                WREG32_SMC(ixCG_ACLK_CNTL, 0x0000001C);
+               if (adev->pdev->revision == 0xc7) {
+                       amdgpu_atombios_i2c_channel_trans(adev, 0x10, 0x96, 0x1E, 0xDD);
+                       amdgpu_atombios_i2c_channel_trans(adev, 0x10, 0x96, 0x1F, 0xD0);
+               }
                break;
        case CHIP_CARRIZO:
                amdgpu_program_register_sequence(adev,
index ec2a7ad..91e25f9 100644
@@ -98,7 +98,6 @@
 #define PCIE_BUS_CLK                10000
 #define TCLK                        (PCIE_BUS_CLK / 10)
 
-#define CEILING_UCHAR(double) ((double-(uint8_t)(double)) > 0 ? (uint8_t)(double+1) : (uint8_t)(double))
 
 static const uint16_t polaris10_clock_stretcher_lookup_table[2][4] =
 { {600, 1050, 3, 0}, {600, 1050, 6, 1} };
@@ -733,7 +732,7 @@ static int polaris10_populate_smc_mvdd_table(struct pp_hwmgr *hwmgr,
                        table->Smio[level] |=
                                data->mvdd_voltage_table.entries[level].smio_low;
                }
-               table->SmioMask2 = data->vddci_voltage_table.mask_low;
+               table->SmioMask2 = data->mvdd_voltage_table.mask_low;
 
                table->MvddLevelCount = (uint32_t) PP_HOST_TO_SMC_UL(count);
        }
@@ -1807,27 +1806,25 @@ static int polaris10_populate_clock_stretcher_data_table(struct pp_hwmgr *hwmgr)
 
        ro = efuse * (max -min)/255 + min;
 
-       /* Populate Sclk_CKS_masterEn0_7 and Sclk_voltageOffset
-        * there is a little difference in calculating
-        * volt_with_cks with windows */
+       /* Populate Sclk_CKS_masterEn0_7 and Sclk_voltageOffset */
        for (i = 0; i < sclk_table->count; i++) {
                data->smc_state_table.Sclk_CKS_masterEn0_7 |=
                                sclk_table->entries[i].cks_enable << i;
                if (hwmgr->chip_id == CHIP_POLARIS10) {
-                       volt_without_cks = (uint32_t)((2753594000 + (sclk_table->entries[i].clk/100) * 136418 -(ro - 70) * 1000000) / \
+                       volt_without_cks = (uint32_t)((2753594000U + (sclk_table->entries[i].clk/100) * 136418 -(ro - 70) * 1000000) / \
                                                (2424180 - (sclk_table->entries[i].clk/100) * 1132925/1000));
-                       volt_with_cks = (uint32_t)((279720200 + sclk_table->entries[i].clk * 3232 - (ro - 65) * 100000000) / \
-                                       (252248000 - sclk_table->entries[i].clk/100 * 115764));
+                       volt_with_cks = (uint32_t)((2797202000U + sclk_table->entries[i].clk/100 * 3232 - (ro - 65) * 1000000) / \
+                                       (2522480 - sclk_table->entries[i].clk/100 * 115764/100));
                } else {
-                       volt_without_cks = (uint32_t)((2416794800 + (sclk_table->entries[i].clk/100) * 1476925/10 -(ro - 50) * 1000000) / \
-                                               (2625416 - (sclk_table->entries[i].clk/100) * 12586807/10000));
-                       volt_with_cks = (uint32_t)((2999656000 + sclk_table->entries[i].clk * 392803/100 - (ro - 44) * 1000000) / \
-                                       (3422454 - sclk_table->entries[i].clk/100 * 18886376/10000));
+                       volt_without_cks = (uint32_t)((2416794800U + (sclk_table->entries[i].clk/100) * 1476925/10 -(ro - 50) * 1000000) / \
+                                               (2625416 - (sclk_table->entries[i].clk/100) * (12586807/10000)));
+                       volt_with_cks = (uint32_t)((2999656000U - sclk_table->entries[i].clk/100 * 392803 - (ro - 44) * 1000000) / \
+                                       (3422454 - sclk_table->entries[i].clk/100 * (18886376/10000)));
                }
 
                if (volt_without_cks >= volt_with_cks)
-                       volt_offset = (uint8_t)CEILING_UCHAR((volt_without_cks - volt_with_cks +
-                                       sclk_table->entries[i].cks_voffset) * 100 / 625);
+                       volt_offset = (uint8_t)(((volt_without_cks - volt_with_cks +
+                                       sclk_table->entries[i].cks_voffset) * 100 + 624) / 625);
 
                data->smc_state_table.Sclk_voltageOffset[i] = volt_offset;
        }
@@ -2685,7 +2682,7 @@ static int polaris10_get_evv_voltages(struct pp_hwmgr *hwmgr)
 {
        struct polaris10_hwmgr *data = (struct polaris10_hwmgr *)(hwmgr->backend);
        uint16_t vv_id;
-       uint16_t vddc = 0;
+       uint32_t vddc = 0;
        uint16_t i, j;
        uint32_t sclk = 0;
        struct phm_ppt_v1_information *table_info =
@@ -2716,8 +2713,9 @@ static int polaris10_get_evv_voltages(struct pp_hwmgr *hwmgr)
                                                continue);
 
 
-                       /* need to make sure vddc is less than 2v or else, it could burn the ASIC. */
-                       PP_ASSERT_WITH_CODE((vddc < 2000 && vddc != 0),
+                       /* need to make sure vddc is less than 2v or else, it could burn the ASIC.
+                        * real voltage level in unit of 0.01mv */
+                       PP_ASSERT_WITH_CODE((vddc < 200000 && vddc != 0),
                                        "Invalid VDDC value", result = -EINVAL;);
 
                        /* the voltage should not be zero nor equal to leakage ID */
index bf4e18f..90b35c5 100644
@@ -1256,7 +1256,7 @@ int atomctrl_set_ac_timing_ai(struct pp_hwmgr *hwmgr, uint32_t memory_clock,
 }
 
 int atomctrl_get_voltage_evv_on_sclk_ai(struct pp_hwmgr *hwmgr, uint8_t voltage_type,
-                               uint32_t sclk, uint16_t virtual_voltage_Id, uint16_t *voltage)
+                               uint32_t sclk, uint16_t virtual_voltage_Id, uint32_t *voltage)
 {
 
        int result;
@@ -1274,7 +1274,7 @@ int atomctrl_get_voltage_evv_on_sclk_ai(struct pp_hwmgr *hwmgr, uint8_t voltage_
        if (0 != result)
                return result;
 
-       *voltage = get_voltage_info_param_space.usVoltageLevel;
+       *voltage = ((GET_EVV_VOLTAGE_INFO_OUTPUT_PARAMETER_V1_3 *)(&get_voltage_info_param_space))->ulVoltageLevel;
 
        return result;
 }
index 248c5db..1e35a96 100644
@@ -305,7 +305,7 @@ extern int atomctrl_get_engine_pll_dividers_ai(struct pp_hwmgr *hwmgr, uint32_t
 extern int atomctrl_set_ac_timing_ai(struct pp_hwmgr *hwmgr, uint32_t memory_clock,
                                                                uint8_t level);
 extern int atomctrl_get_voltage_evv_on_sclk_ai(struct pp_hwmgr *hwmgr, uint8_t voltage_type,
-                               uint32_t sclk, uint16_t virtual_voltage_Id, uint16_t *voltage);
+                               uint32_t sclk, uint16_t virtual_voltage_Id, uint32_t *voltage);
 extern int atomctrl_get_smc_sclk_range_table(struct pp_hwmgr *hwmgr, struct pp_atom_ctrl_sclk_range_table *table);
 
 extern int atomctrl_get_avfs_information(struct pp_hwmgr *hwmgr, struct pp_atom_ctrl__avfs_parameters *param);
index 233eb7f..5d0f655 100644
@@ -1302,7 +1302,7 @@ static int tonga_populate_smc_mvdd_table(struct pp_hwmgr *hwmgr,
                        table->Smio[count] |=
                                data->mvdd_voltage_table.entries[count].smio_low;
                }
-               table->SmioMask2 = data->vddci_voltage_table.mask_low;
+               table->SmioMask2 = data->mvdd_voltage_table.mask_low;
 
                CONVERT_FROM_HOST_TO_SMC_UL(table->MvddLevelCount);
        }
index 671fdb4..dccc859 100644
@@ -302,7 +302,7 @@ static int init_dpm_2_parameters(
                        (((unsigned long)powerplay_table) + le16_to_cpu(powerplay_table->usPPMTableOffset));
 
                if (0 != powerplay_table->usPPMTableOffset) {
-                       if (1 == get_platform_power_management_table(hwmgr, atom_ppm_table)) {
+                       if (get_platform_power_management_table(hwmgr, atom_ppm_table) == 0) {
                                phm_cap_set(hwmgr->platform_descriptor.platformCaps,
                                        PHM_PlatformCaps_EnablePlatformPowerManagement);
                        }
index f313b4d..85c4deb 100644
@@ -512,6 +512,10 @@ void intel_detect_pch(struct drm_device *dev)
                                DRM_DEBUG_KMS("Found SunrisePoint LP PCH\n");
                                WARN_ON(!IS_SKYLAKE(dev) &&
                                        !IS_KABYLAKE(dev));
+                       } else if (id == INTEL_PCH_KBP_DEVICE_ID_TYPE) {
+                               dev_priv->pch_type = PCH_KBP;
+                               DRM_DEBUG_KMS("Found KabyPoint PCH\n");
+                               WARN_ON(!IS_KABYLAKE(dev));
                        } else if ((id == INTEL_PCH_P2X_DEVICE_ID_TYPE) ||
                                   (id == INTEL_PCH_P3X_DEVICE_ID_TYPE) ||
                                   ((id == INTEL_PCH_QEMU_DEVICE_ID_TYPE) &&
index 7c334e9..bc3f2e6 100644
@@ -990,6 +990,7 @@ enum intel_pch {
        PCH_CPT,        /* Cougarpoint PCH */
        PCH_LPT,        /* Lynxpoint PCH */
        PCH_SPT,        /* Sunrisepoint PCH */
+       PCH_KBP,        /* Kabypoint PCH */
        PCH_NOP,
 };
 
@@ -2600,6 +2601,15 @@ struct drm_i915_cmd_table {
 
 #define IS_BXT_REVID(p, since, until) (IS_BROXTON(p) && IS_REVID(p, since, until))
 
+#define KBL_REVID_A0           0x0
+#define KBL_REVID_B0           0x1
+#define KBL_REVID_C0           0x2
+#define KBL_REVID_D0           0x3
+#define KBL_REVID_E0           0x4
+
+#define IS_KBL_REVID(p, since, until) \
+       (IS_KABYLAKE(p) && IS_REVID(p, since, until))
+
 /*
  * The genX designation typically refers to the render engine, so render
  * capability related checks should use IS_GEN, while display and other checks
@@ -2708,11 +2718,13 @@ struct drm_i915_cmd_table {
 #define INTEL_PCH_LPT_LP_DEVICE_ID_TYPE                0x9c00
 #define INTEL_PCH_SPT_DEVICE_ID_TYPE           0xA100
 #define INTEL_PCH_SPT_LP_DEVICE_ID_TYPE                0x9D00
+#define INTEL_PCH_KBP_DEVICE_ID_TYPE           0xA200
 #define INTEL_PCH_P2X_DEVICE_ID_TYPE           0x7100
 #define INTEL_PCH_P3X_DEVICE_ID_TYPE           0x7000
 #define INTEL_PCH_QEMU_DEVICE_ID_TYPE          0x2900 /* qemu q35 has 2918 */
 
 #define INTEL_PCH_TYPE(dev) (__I915__(dev)->pch_type)
+#define HAS_PCH_KBP(dev) (INTEL_PCH_TYPE(dev) == PCH_KBP)
 #define HAS_PCH_SPT(dev) (INTEL_PCH_TYPE(dev) == PCH_SPT)
 #define HAS_PCH_LPT(dev) (INTEL_PCH_TYPE(dev) == PCH_LPT)
 #define HAS_PCH_LPT_LP(dev) (__I915__(dev)->pch_id == INTEL_PCH_LPT_LP_DEVICE_ID_TYPE)
index 425e721..6657146 100644
@@ -40,7 +40,7 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
        if (!mutex_is_locked(mutex))
                return false;
 
-#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
+#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
        return mutex->owner == task;
 #else
        /* Since UP may be pre-empted, we cannot assume that we own the lock */
index b7ce963..44004e3 100644
@@ -55,8 +55,10 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv,
                return -ENODEV;
 
        /* See the comment at the drm_mm_init() call for more about this check.
-        * WaSkipStolenMemoryFirstPage:bdw,chv (incomplete) */
-       if (INTEL_INFO(dev_priv)->gen == 8 && start < 4096)
+        * WaSkipStolenMemoryFirstPage:bdw,chv,kbl (incomplete)
+        */
+       if (start < 4096 && (IS_GEN8(dev_priv) ||
+                            IS_KBL_REVID(dev_priv, 0, KBL_REVID_A0)))
                start = 4096;
 
        mutex_lock(&dev_priv->mm.stolen_lock);
index 2f6fd33..aab47f7 100644
@@ -2471,7 +2471,7 @@ gen8_de_irq_handler(struct drm_i915_private *dev_priv, u32 master_ctl)
                        I915_WRITE(SDEIIR, iir);
                        ret = IRQ_HANDLED;
 
-                       if (HAS_PCH_SPT(dev_priv))
+                       if (HAS_PCH_SPT(dev_priv) || HAS_PCH_KBP(dev_priv))
                                spt_irq_handler(dev, iir);
                        else
                                cpt_irq_handler(dev, iir);
@@ -4661,7 +4661,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
                dev->driver->disable_vblank = gen8_disable_vblank;
                if (IS_BROXTON(dev))
                        dev_priv->display.hpd_irq_setup = bxt_hpd_irq_setup;
-               else if (HAS_PCH_SPT(dev))
+               else if (HAS_PCH_SPT(dev) || HAS_PCH_KBP(dev))
                        dev_priv->display.hpd_irq_setup = spt_hpd_irq_setup;
                else
                        dev_priv->display.hpd_irq_setup = ilk_hpd_irq_setup;
index b407411..3fcf7dd 100644
@@ -220,6 +220,9 @@ static inline bool i915_mmio_reg_valid(i915_reg_t reg)
 #define   ECOCHK_PPGTT_WT_HSW          (0x2<<3)
 #define   ECOCHK_PPGTT_WB_HSW          (0x3<<3)
 
+#define GEN8_CONFIG0                   _MMIO(0xD00)
+#define  GEN9_DEFAULT_FIXES            (1 << 3 | 1 << 2 | 1 << 1)
+
 #define GAC_ECO_BITS                   _MMIO(0x14090)
 #define   ECOBITS_SNB_BIT              (1<<13)
 #define   ECOBITS_PPGTT_CACHE64B       (3<<8)
@@ -1669,6 +1672,9 @@ enum skl_disp_power_wells {
 
 #define GEN7_TLB_RD_ADDR       _MMIO(0x4700)
 
+#define GAMT_CHKN_BIT_REG      _MMIO(0x4ab8)
+#define   GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING     (1<<28)
+
 #if 0
 #define PRB0_TAIL      _MMIO(0x2030)
 #define PRB0_HEAD      _MMIO(0x2034)
@@ -1804,6 +1810,10 @@ enum skl_disp_power_wells {
 #define   GEN9_IZ_HASHING_MASK(slice)                  (0x3 << ((slice) * 2))
 #define   GEN9_IZ_HASHING(slice, val)                  ((val) << ((slice) * 2))
 
+/* chicken reg for WaConextSwitchWithConcurrentTLBInvalidate */
+#define GEN9_CSFE_CHICKEN1_RCS _MMIO(0x20D4)
+#define   GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE (1 << 2)
+
 /* WaClearTdlStateAckDirtyBits */
 #define GEN8_STATE_ACK         _MMIO(0x20F0)
 #define GEN9_STATE_ACK_SLICE1  _MMIO(0x20F8)
@@ -2200,6 +2210,8 @@ enum skl_disp_power_wells {
 #define ILK_DPFC_STATUS                _MMIO(0x43210)
 #define ILK_DPFC_FENCE_YOFF    _MMIO(0x43218)
 #define ILK_DPFC_CHICKEN       _MMIO(0x43224)
+#define   ILK_DPFC_DISABLE_DUMMY0 (1<<8)
+#define   ILK_DPFC_NUKE_ON_ANY_MODIFICATION    (1<<23)
 #define ILK_FBC_RT_BASE                _MMIO(0x2128)
 #define   ILK_FBC_RT_VALID     (1<<0)
 #define   SNB_FBC_FRONT_BUFFER (1<<1)
@@ -6031,6 +6043,7 @@ enum skl_disp_power_wells {
 #define CHICKEN_PAR1_1         _MMIO(0x42080)
 #define  DPA_MASK_VBLANK_SRD   (1 << 15)
 #define  FORCE_ARB_IDLE_PLANES (1 << 14)
+#define  SKL_EDP_PSR_FIX_RDWRAP        (1 << 3)
 
 #define _CHICKEN_PIPESL_1_A    0x420b0
 #define _CHICKEN_PIPESL_1_B    0x420b4
@@ -6039,6 +6052,7 @@ enum skl_disp_power_wells {
 #define CHICKEN_PIPESL_1(pipe) _MMIO_PIPE(pipe, _CHICKEN_PIPESL_1_A, _CHICKEN_PIPESL_1_B)
 
 #define DISP_ARB_CTL   _MMIO(0x45000)
+#define  DISP_FBC_MEMORY_WAKE          (1<<31)
 #define  DISP_TILE_SURFACE_SWIZZLING   (1<<13)
 #define  DISP_FBC_WM_DIS               (1<<15)
 #define DISP_ARB_CTL2  _MMIO(0x45004)
@@ -6052,6 +6066,9 @@ enum skl_disp_power_wells {
 #define HSW_NDE_RSTWRN_OPT     _MMIO(0x46408)
 #define  RESET_PCH_HANDSHAKE_ENABLE    (1<<4)
 
+#define GEN8_CHICKEN_DCPR_1            _MMIO(0x46430)
+#define   MASK_WAKEMEM                 (1<<13)
+
 #define SKL_DFSM                       _MMIO(0x51000)
 #define SKL_DFSM_CDCLK_LIMIT_MASK      (3 << 23)
 #define SKL_DFSM_CDCLK_LIMIT_675       (0 << 23)
@@ -6069,6 +6086,7 @@ enum skl_disp_power_wells {
 #define  GEN9_TSG_BARRIER_ACK_DISABLE          (1<<8)
 
 #define GEN9_CS_DEBUG_MODE1            _MMIO(0x20ec)
+#define GEN9_CTX_PREEMPT_REG           _MMIO(0x2248)
 #define GEN8_CS_CHICKEN1               _MMIO(0x2580)
 
 /* GEN7 chicken */
@@ -6076,6 +6094,7 @@ enum skl_disp_power_wells {
 # define GEN7_CSC1_RHWO_OPT_DISABLE_IN_RCC     ((1<<10) | (1<<26))
 # define GEN9_RHWO_OPTIMIZATION_DISABLE                (1<<14)
 #define COMMON_SLICE_CHICKEN2                  _MMIO(0x7014)
+# define GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION (1<<8)
 # define GEN8_CSC2_SBE_VUE_CACHE_CONSERVATIVE  (1<<0)
 
 #define HIZ_CHICKEN                                    _MMIO(0x7018)
@@ -6921,6 +6940,7 @@ enum skl_disp_power_wells {
 #define    EDRAM_SETS_IDX(cap)                 (((cap) >> 8) & 0x3)
 
 #define GEN6_UCGCTL1                           _MMIO(0x9400)
+# define GEN6_GAMUNIT_CLOCK_GATE_DISABLE               (1 << 22)
 # define GEN6_EU_TCUNIT_CLOCK_GATE_DISABLE             (1 << 16)
 # define GEN6_BLBUNIT_CLOCK_GATE_DISABLE               (1 << 5)
 # define GEN6_CSUNIT_CLOCK_GATE_DISABLE                        (1 << 7)
@@ -6937,6 +6957,7 @@ enum skl_disp_power_wells {
 
 #define GEN7_UCGCTL4                           _MMIO(0x940c)
 #define  GEN7_L3BANK2X_CLOCK_GATE_DISABLE      (1<<25)
+#define  GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE     (1<<14)
 
 #define GEN6_RCGCTL1                           _MMIO(0x9410)
 #define GEN6_RCGCTL2                           _MMIO(0x9414)
index a34c23e..2b3b428 100644
  * be moved to FW_FAILED.
  */
 
+#define I915_CSR_KBL "i915/kbl_dmc_ver1.bin"
+MODULE_FIRMWARE(I915_CSR_KBL);
+#define KBL_CSR_VERSION_REQUIRED       CSR_VERSION(1, 1)
+
 #define I915_CSR_SKL "i915/skl_dmc_ver1.bin"
+MODULE_FIRMWARE(I915_CSR_SKL);
+#define SKL_CSR_VERSION_REQUIRED       CSR_VERSION(1, 23)
+
 #define I915_CSR_BXT "i915/bxt_dmc_ver1.bin"
+MODULE_FIRMWARE(I915_CSR_BXT);
+#define BXT_CSR_VERSION_REQUIRED       CSR_VERSION(1, 7)
 
 #define FIRMWARE_URL  "https://01.org/linuxgraphics/intel-linux-graphics-firmwares"
 
-MODULE_FIRMWARE(I915_CSR_SKL);
-MODULE_FIRMWARE(I915_CSR_BXT);
 
-#define SKL_CSR_VERSION_REQUIRED       CSR_VERSION(1, 23)
-#define BXT_CSR_VERSION_REQUIRED       CSR_VERSION(1, 7)
+
 
 #define CSR_MAX_FW_SIZE                        0x2FFF
 #define CSR_DEFAULT_FW_OFFSET          0xFFFFFFFF
@@ -169,12 +175,10 @@ struct stepping_info {
        char substepping;
 };
 
-/*
- * Kabylake derivated from Skylake H0, so SKL H0
- * is the right firmware for KBL A0 (revid 0).
- */
 static const struct stepping_info kbl_stepping_info[] = {
-       {'H', '0'}, {'I', '0'}
+       {'A', '0'}, {'B', '0'}, {'C', '0'},
+       {'D', '0'}, {'E', '0'}, {'F', '0'},
+       {'G', '0'}, {'H', '0'}, {'I', '0'},
 };
 
 static const struct stepping_info skl_stepping_info[] = {
@@ -298,7 +302,9 @@ static uint32_t *parse_csr_fw(struct drm_i915_private *dev_priv,
 
        csr->version = css_header->version;
 
-       if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
+       if (IS_KABYLAKE(dev_priv)) {
+               required_min_version = KBL_CSR_VERSION_REQUIRED;
+       } else if (IS_SKYLAKE(dev_priv)) {
                required_min_version = SKL_CSR_VERSION_REQUIRED;
        } else if (IS_BROXTON(dev_priv)) {
                required_min_version = BXT_CSR_VERSION_REQUIRED;
@@ -446,7 +452,9 @@ void intel_csr_ucode_init(struct drm_i915_private *dev_priv)
        if (!HAS_CSR(dev_priv))
                return;
 
-       if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv))
+       if (IS_KABYLAKE(dev_priv))
+               csr->fw_path = I915_CSR_KBL;
+       else if (IS_SKYLAKE(dev_priv))
                csr->fw_path = I915_CSR_SKL;
        else if (IS_BROXTON(dev_priv))
                csr->fw_path = I915_CSR_BXT;
index 04452cf..3074c56 100644
@@ -11997,6 +11997,12 @@ static int intel_crtc_atomic_check(struct drm_crtc *crtc,
                ret = intel_color_check(crtc, crtc_state);
                if (ret)
                        return ret;
+
+               /*
+                * Changing color management on Intel hardware is
+                * handled as part of planes update.
+                */
+               crtc_state->planes_changed = true;
        }
 
        ret = 0;
index 40745e3..891107f 100644
@@ -4645,7 +4645,7 @@ intel_dp_detect(struct drm_connector *connector, bool force)
 
        intel_dp->detect_done = false;
 
-       if (intel_connector->detect_edid)
+       if (is_edp(intel_dp) || intel_connector->detect_edid)
                return connector_status_connected;
        else
                return connector_status_disconnected;
index 42eac37..7f2d841 100644
@@ -1103,15 +1103,17 @@ static inline int gen8_emit_flush_coherentl3_wa(struct intel_engine_cs *engine,
                                                uint32_t *const batch,
                                                uint32_t index)
 {
+       struct drm_i915_private *dev_priv = engine->dev->dev_private;
        uint32_t l3sqc4_flush = (0x40400000 | GEN8_LQSC_FLUSH_COHERENT_LINES);
 
        /*
-        * WaDisableLSQCROPERFforOCL:skl
+        * WaDisableLSQCROPERFforOCL:skl,kbl
         * This WA is implemented in skl_init_clock_gating() but since
         * this batch updates GEN8_L3SQCREG4 with default value we need to
         * set this bit here to retain the WA during flush.
         */
-       if (IS_SKL_REVID(engine->dev, 0, SKL_REVID_E0))
+       if (IS_SKL_REVID(dev_priv, 0, SKL_REVID_E0) ||
+           IS_KBL_REVID(dev_priv, 0, KBL_REVID_E0))
                l3sqc4_flush |= GEN8_LQSC_RO_PERF_DIS;
 
        wa_ctx_emit(batch, index, (MI_STORE_REGISTER_MEM_GEN8 |
@@ -1273,6 +1275,7 @@ static int gen9_init_indirectctx_bb(struct intel_engine_cs *engine,
 {
        int ret;
        struct drm_device *dev = engine->dev;
+       struct drm_i915_private *dev_priv = dev->dev_private;
        uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
 
        /* WaDisableCtxRestoreArbitration:skl,bxt */
@@ -1286,6 +1289,22 @@ static int gen9_init_indirectctx_bb(struct intel_engine_cs *engine,
                return ret;
        index = ret;
 
+       /* WaClearSlmSpaceAtContextSwitch:kbl */
+       /* Actual scratch location is at 128 bytes offset */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_A0)) {
+               uint32_t scratch_addr
+                       = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;
+
+               wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
+               wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
+                                          PIPE_CONTROL_GLOBAL_GTT_IVB |
+                                          PIPE_CONTROL_CS_STALL |
+                                          PIPE_CONTROL_QW_WRITE));
+               wa_ctx_emit(batch, index, scratch_addr);
+               wa_ctx_emit(batch, index, 0);
+               wa_ctx_emit(batch, index, 0);
+               wa_ctx_emit(batch, index, 0);
+       }
        /* Pad to end of cacheline */
        while (index % CACHELINE_DWORDS)
                wa_ctx_emit(batch, index, MI_NOOP);
@@ -1687,9 +1706,10 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
        struct intel_ringbuffer *ringbuf = request->ringbuf;
        struct intel_engine_cs *engine = ringbuf->engine;
        u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
-       bool vf_flush_wa = false;
+       bool vf_flush_wa = false, dc_flush_wa = false;
        u32 flags = 0;
        int ret;
+       int len;
 
        flags |= PIPE_CONTROL_CS_STALL;
 
@@ -1716,9 +1736,21 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
                 */
                if (IS_GEN9(engine->dev))
                        vf_flush_wa = true;
+
+               /* WaForGAMHang:kbl */
+               if (IS_KBL_REVID(request->i915, 0, KBL_REVID_B0))
+                       dc_flush_wa = true;
        }
 
-       ret = intel_ring_begin(request, vf_flush_wa ? 12 : 6);
+       len = 6;
+
+       if (vf_flush_wa)
+               len += 6;
+
+       if (dc_flush_wa)
+               len += 12;
+
+       ret = intel_ring_begin(request, len);
        if (ret)
                return ret;
 
@@ -1731,12 +1763,31 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
                intel_logical_ring_emit(ringbuf, 0);
        }
 
+       if (dc_flush_wa) {
+               intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+               intel_logical_ring_emit(ringbuf, PIPE_CONTROL_DC_FLUSH_ENABLE);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+       }
+
        intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
        intel_logical_ring_emit(ringbuf, flags);
        intel_logical_ring_emit(ringbuf, scratch_addr);
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, 0);
        intel_logical_ring_emit(ringbuf, 0);
+
+       if (dc_flush_wa) {
+               intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+               intel_logical_ring_emit(ringbuf, PIPE_CONTROL_CS_STALL);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+               intel_logical_ring_emit(ringbuf, 0);
+       }
+
        intel_logical_ring_advance(ringbuf);
 
        return 0;
index 99e2603..16e209d 100644
@@ -1038,5 +1038,16 @@ intel_opregion_get_panel_type(struct drm_device *dev)
                return -ENODEV;
        }
 
+       /*
+        * FIXME On Dell XPS 13 9350 the OpRegion panel type (0) gives us
+        * low vswing for eDP, whereas the VBT panel type (2) gives us normal
+        * vswing instead. Low vswing results in some display flickers, so
+        * let's simply ignore the OpRegion panel type on SKL for now.
+        */
+       if (IS_SKYLAKE(dev)) {
+               DRM_DEBUG_KMS("Ignoring OpRegion panel type (%d)\n", ret - 1);
+               return -ENODEV;
+       }
+
        return ret - 1;
 }
index 8357d57..aba9409 100644
@@ -1731,7 +1731,8 @@ intel_panel_init_backlight_funcs(struct intel_panel *panel)
                panel->backlight.set = bxt_set_backlight;
                panel->backlight.get = bxt_get_backlight;
                panel->backlight.hz_to_pwm = bxt_hz_to_pwm;
-       } else if (HAS_PCH_LPT(dev_priv) || HAS_PCH_SPT(dev_priv)) {
+       } else if (HAS_PCH_LPT(dev_priv) || HAS_PCH_SPT(dev_priv) ||
+                  HAS_PCH_KBP(dev_priv)) {
                panel->backlight.setup = lpt_setup_backlight;
                panel->backlight.enable = lpt_enable_backlight;
                panel->backlight.disable = lpt_disable_backlight;
index a7ef45d..2863b92 100644
 #define INTEL_RC6p_ENABLE                      (1<<1)
 #define INTEL_RC6pp_ENABLE                     (1<<2)
 
+static void gen9_init_clock_gating(struct drm_device *dev)
+{
+       struct drm_i915_private *dev_priv = dev->dev_private;
+
+       /* See Bspec note for PSR2_CTL bit 31, Wa#828:skl,bxt,kbl */
+       I915_WRITE(CHICKEN_PAR1_1,
+                  I915_READ(CHICKEN_PAR1_1) | SKL_EDP_PSR_FIX_RDWRAP);
+
+       I915_WRITE(GEN8_CONFIG0,
+                  I915_READ(GEN8_CONFIG0) | GEN9_DEFAULT_FIXES);
+
+       /* WaEnableChickenDCPR:skl,bxt,kbl */
+       I915_WRITE(GEN8_CHICKEN_DCPR_1,
+                  I915_READ(GEN8_CHICKEN_DCPR_1) | MASK_WAKEMEM);
+
+       /* WaFbcTurnOffFbcWatermark:skl,bxt,kbl */
+       /* WaFbcWakeMemOn:skl,bxt,kbl */
+       I915_WRITE(DISP_ARB_CTL, I915_READ(DISP_ARB_CTL) |
+                  DISP_FBC_WM_DIS |
+                  DISP_FBC_MEMORY_WAKE);
+
+       /* WaFbcHighMemBwCorruptionAvoidance:skl,bxt,kbl */
+       I915_WRITE(ILK_DPFC_CHICKEN, I915_READ(ILK_DPFC_CHICKEN) |
+                  ILK_DPFC_DISABLE_DUMMY0);
+}
+
 static void bxt_init_clock_gating(struct drm_device *dev)
 {
        struct drm_i915_private *dev_priv = dev->dev_private;
 
+       gen9_init_clock_gating(dev);
+
        /* WaDisableSDEUnitClockGating:bxt */
        I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
                   GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
@@ -6698,6 +6726,38 @@ static void lpt_suspend_hw(struct drm_device *dev)
        }
 }
 
+static void kabylake_init_clock_gating(struct drm_device *dev)
+{
+       struct drm_i915_private *dev_priv = dev->dev_private;
+
+       gen9_init_clock_gating(dev);
+
+       /* WaDisableSDEUnitClockGating:kbl */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
+               I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) |
+                          GEN8_SDEUNIT_CLOCK_GATE_DISABLE);
+
+       /* WaDisableGamClockGating:kbl */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
+               I915_WRITE(GEN6_UCGCTL1, I915_READ(GEN6_UCGCTL1) |
+                          GEN6_GAMUNIT_CLOCK_GATE_DISABLE);
+
+       /* WaFbcNukeOnHostModify:kbl */
+       I915_WRITE(ILK_DPFC_CHICKEN, I915_READ(ILK_DPFC_CHICKEN) |
+                  ILK_DPFC_NUKE_ON_ANY_MODIFICATION);
+}
+
+static void skylake_init_clock_gating(struct drm_device *dev)
+{
+       struct drm_i915_private *dev_priv = dev->dev_private;
+
+       gen9_init_clock_gating(dev);
+
+       /* WaFbcNukeOnHostModify:skl */
+       I915_WRITE(ILK_DPFC_CHICKEN, I915_READ(ILK_DPFC_CHICKEN) |
+                  ILK_DPFC_NUKE_ON_ANY_MODIFICATION);
+}
+
 static void broadwell_init_clock_gating(struct drm_device *dev)
 {
        struct drm_i915_private *dev_priv = dev->dev_private;
@@ -7163,9 +7223,9 @@ static void nop_init_clock_gating(struct drm_device *dev)
 void intel_init_clock_gating_hooks(struct drm_i915_private *dev_priv)
 {
        if (IS_SKYLAKE(dev_priv))
-               dev_priv->display.init_clock_gating = nop_init_clock_gating;
+               dev_priv->display.init_clock_gating = skylake_init_clock_gating;
        else if (IS_KABYLAKE(dev_priv))
-               dev_priv->display.init_clock_gating = nop_init_clock_gating;
+               dev_priv->display.init_clock_gating = kabylake_init_clock_gating;
        else if (IS_BROXTON(dev_priv))
                dev_priv->display.init_clock_gating = bxt_init_clock_gating;
        else if (IS_BROADWELL(dev_priv))
index 04402bb..68c5af0 100644
@@ -913,24 +913,26 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine)
 {
        struct drm_device *dev = engine->dev;
        struct drm_i915_private *dev_priv = dev->dev_private;
-       uint32_t tmp;
        int ret;
 
-       /* WaEnableLbsSlaRetryTimerDecrement:skl */
+       /* WaConextSwitchWithConcurrentTLBInvalidate:skl,bxt,kbl */
+       I915_WRITE(GEN9_CSFE_CHICKEN1_RCS, _MASKED_BIT_ENABLE(GEN9_PREEMPT_GPGPU_SYNC_SWITCH_DISABLE));
+
+       /* WaEnableLbsSlaRetryTimerDecrement:skl,bxt,kbl */
        I915_WRITE(BDW_SCRATCH1, I915_READ(BDW_SCRATCH1) |
                   GEN9_LBS_SLA_RETRY_TIMER_DECREMENT_ENABLE);
 
-       /* WaDisableKillLogic:bxt,skl */
+       /* WaDisableKillLogic:bxt,skl,kbl */
        I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
                   ECOCHK_DIS_TLB);
 
-       /* WaClearFlowControlGpgpuContextSave:skl,bxt */
-       /* WaDisablePartialInstShootdown:skl,bxt */
+       /* WaClearFlowControlGpgpuContextSave:skl,bxt,kbl */
+       /* WaDisablePartialInstShootdown:skl,bxt,kbl */
        WA_SET_BIT_MASKED(GEN8_ROW_CHICKEN,
                          FLOW_CONTROL_ENABLE |
                          PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE);
 
-       /* Syncing dependencies between camera and graphics:skl,bxt */
+       /* Syncing dependencies between camera and graphics:skl,bxt,kbl */
        WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
                          GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC);
 
@@ -952,18 +954,18 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine)
                 */
        }
 
-       /* WaEnableYV12BugFixInHalfSliceChicken7:skl,bxt */
-       /* WaEnableSamplerGPGPUPreemptionSupport:skl,bxt */
+       /* WaEnableYV12BugFixInHalfSliceChicken7:skl,bxt,kbl */
+       /* WaEnableSamplerGPGPUPreemptionSupport:skl,bxt,kbl */
        WA_SET_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN7,
                          GEN9_ENABLE_YV12_BUGFIX |
                          GEN9_ENABLE_GPGPU_PREEMPTION);
 
-       /* Wa4x4STCOptimizationDisable:skl,bxt */
-       /* WaDisablePartialResolveInVc:skl,bxt */
+       /* Wa4x4STCOptimizationDisable:skl,bxt,kbl */
+       /* WaDisablePartialResolveInVc:skl,bxt,kbl */
        WA_SET_BIT_MASKED(CACHE_MODE_1, (GEN8_4x4_STC_OPTIMIZATION_DISABLE |
                                         GEN9_PARTIAL_RESOLVE_IN_VC_DISABLE));
 
-       /* WaCcsTlbPrefetchDisable:skl,bxt */
+       /* WaCcsTlbPrefetchDisable:skl,bxt,kbl */
        WA_CLR_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN5,
                          GEN9_CCS_TLB_PREFETCH_ENABLE);
 
@@ -973,31 +975,57 @@ static int gen9_init_workarounds(struct intel_engine_cs *engine)
                WA_SET_BIT_MASKED(SLICE_ECO_CHICKEN0,
                                  PIXEL_MASK_CAMMING_DISABLE);
 
-       /* WaForceContextSaveRestoreNonCoherent:skl,bxt */
-       tmp = HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT;
-       if (IS_SKL_REVID(dev, SKL_REVID_F0, REVID_FOREVER) ||
-           IS_BXT_REVID(dev, BXT_REVID_B0, REVID_FOREVER))
-               tmp |= HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE;
-       WA_SET_BIT_MASKED(HDC_CHICKEN0, tmp);
+       /* WaForceContextSaveRestoreNonCoherent:skl,bxt,kbl */
+       WA_SET_BIT_MASKED(HDC_CHICKEN0,
+                         HDC_FORCE_CONTEXT_SAVE_RESTORE_NON_COHERENT |
+                         HDC_FORCE_CSR_NON_COHERENT_OVR_DISABLE);
+
+       /* WaForceEnableNonCoherent and WaDisableHDCInvalidation are
+        * both tied to WaForceContextSaveRestoreNonCoherent
+        * in some hsds for skl. We keep the tie for all gen9. The
+        * documentation is a bit hazy and so we want to get common behaviour,
+        * even though there is no clear evidence we would need both on kbl/bxt.
+        * This area has been source of system hangs so we play it safe
+        * and mimic the skl regardless of what bspec says.
+        *
+        * Use Force Non-Coherent whenever executing a 3D context. This
+        * is a workaround for a possible hang in the unlikely event
+        * a TLB invalidation occurs during a PSD flush.
+        */
 
-       /* WaDisableSamplerPowerBypassForSOPingPong:skl,bxt */
-       if (IS_SKYLAKE(dev) || IS_BXT_REVID(dev, 0, BXT_REVID_B0))
+       /* WaForceEnableNonCoherent:skl,bxt,kbl */
+       WA_SET_BIT_MASKED(HDC_CHICKEN0,
+                         HDC_FORCE_NON_COHERENT);
+
+       /* WaDisableHDCInvalidation:skl,bxt,kbl */
+       I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
+                  BDW_DISABLE_HDC_INVALIDATION);
+
+       /* WaDisableSamplerPowerBypassForSOPingPong:skl,bxt,kbl */
+       if (IS_SKYLAKE(dev_priv) ||
+           IS_KABYLAKE(dev_priv) ||
+           IS_BXT_REVID(dev_priv, 0, BXT_REVID_B0))
                WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3,
                                  GEN8_SAMPLER_POWER_BYPASS_DIS);
 
-       /* WaDisableSTUnitPowerOptimization:skl,bxt */
+       /* WaDisableSTUnitPowerOptimization:skl,bxt,kbl */
        WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN2, GEN8_ST_PO_DISABLE);
 
-       /* WaOCLCoherentLineFlush:skl,bxt */
+       /* WaOCLCoherentLineFlush:skl,bxt,kbl */
        I915_WRITE(GEN8_L3SQCREG4, (I915_READ(GEN8_L3SQCREG4) |
                                    GEN8_LQSC_FLUSH_COHERENT_LINES));
 
-       /* WaEnablePreemptionGranularityControlByUMD:skl,bxt */
+       /* WaVFEStateAfterPipeControlwithMediaStateClear:skl,bxt */
+       ret = wa_ring_whitelist_reg(engine, GEN9_CTX_PREEMPT_REG);
+       if (ret)
+               return ret;
+
+       /* WaEnablePreemptionGranularityControlByUMD:skl,bxt,kbl */
        ret = wa_ring_whitelist_reg(engine, GEN8_CS_CHICKEN1);
        if (ret)
                return ret;
 
-       /* WaAllowUMDToModifyHDCChicken1:skl,bxt */
+       /* WaAllowUMDToModifyHDCChicken1:skl,bxt,kbl */
        ret = wa_ring_whitelist_reg(engine, GEN8_HDC_CHICKEN1);
        if (ret)
                return ret;
@@ -1092,22 +1120,6 @@ static int skl_init_workarounds(struct intel_engine_cs *engine)
                WA_SET_BIT_MASKED(HIZ_CHICKEN,
                                  BDW_HIZ_POWER_COMPILER_CLOCK_GATING_DISABLE);
 
-       /* This is tied to WaForceContextSaveRestoreNonCoherent */
-       if (IS_SKL_REVID(dev, 0, REVID_FOREVER)) {
-               /*
-                *Use Force Non-Coherent whenever executing a 3D context. This
-                * is a workaround for a possible hang in the unlikely event
-                * a TLB invalidation occurs during a PSD flush.
-                */
-               /* WaForceEnableNonCoherent:skl */
-               WA_SET_BIT_MASKED(HDC_CHICKEN0,
-                                 HDC_FORCE_NON_COHERENT);
-
-               /* WaDisableHDCInvalidation:skl */
-               I915_WRITE(GAM_ECOCHK, I915_READ(GAM_ECOCHK) |
-                          BDW_DISABLE_HDC_INVALIDATION);
-       }
-
        /* WaBarrierPerformanceFixDisable:skl */
        if (IS_SKL_REVID(dev, SKL_REVID_C0, SKL_REVID_D0))
                WA_SET_BIT_MASKED(HDC_CHICKEN0,
@@ -1120,6 +1132,9 @@ static int skl_init_workarounds(struct intel_engine_cs *engine)
                        GEN7_HALF_SLICE_CHICKEN1,
                        GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
 
+       /* WaDisableGafsUnitClkGating:skl */
+       WA_SET_BIT(GEN7_UCGCTL4, GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE);
+
        /* WaDisableLSQCROPERFforOCL:skl */
        ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4);
        if (ret)
@@ -1174,6 +1189,63 @@ static int bxt_init_workarounds(struct intel_engine_cs *engine)
                        return ret;
        }
 
+       /* WaInsertDummyPushConstPs:bxt */
+       if (IS_BXT_REVID(dev_priv, 0, BXT_REVID_B0))
+               WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
+                                 GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
+
+       return 0;
+}
+
+static int kbl_init_workarounds(struct intel_engine_cs *engine)
+{
+       struct drm_i915_private *dev_priv = engine->dev->dev_private;
+       int ret;
+
+       ret = gen9_init_workarounds(engine);
+       if (ret)
+               return ret;
+
+       /* WaEnableGapsTsvCreditFix:kbl */
+       I915_WRITE(GEN8_GARBCNTL, (I915_READ(GEN8_GARBCNTL) |
+                                  GEN9_GAPS_TSV_CREDIT_DISABLE));
+
+       /* WaDisableDynamicCreditSharing:kbl */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
+               WA_SET_BIT(GAMT_CHKN_BIT_REG,
+                          GAMT_CHKN_DISABLE_DYNAMIC_CREDIT_SHARING);
+
+       /* WaDisableFenceDestinationToSLM:kbl (pre-prod) */
+       if (IS_KBL_REVID(dev_priv, KBL_REVID_A0, KBL_REVID_A0))
+               WA_SET_BIT_MASKED(HDC_CHICKEN0,
+                                 HDC_FENCE_DEST_SLM_DISABLE);
+
+       /* GEN8_L3SQCREG4 has a dependency with WA batch so any new changes
+        * involving this register should also be added to WA batch as required.
+        */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_E0))
+               /* WaDisableLSQCROPERFforOCL:kbl */
+               I915_WRITE(GEN8_L3SQCREG4, I915_READ(GEN8_L3SQCREG4) |
+                          GEN8_LQSC_RO_PERF_DIS);
+
+       /* WaInsertDummyPushConstPs:kbl */
+       if (IS_KBL_REVID(dev_priv, 0, KBL_REVID_B0))
+               WA_SET_BIT_MASKED(COMMON_SLICE_CHICKEN2,
+                                 GEN8_SBE_DISABLE_REPLAY_BUF_OPTIMIZATION);
+
+       /* WaDisableGafsUnitClkGating:kbl */
+       WA_SET_BIT(GEN7_UCGCTL4, GEN8_EU_GAUNIT_CLOCK_GATE_DISABLE);
+
+       /* WaDisableSbeCacheDispatchPortSharing:kbl */
+       WA_SET_BIT_MASKED(
+               GEN7_HALF_SLICE_CHICKEN1,
+               GEN7_SBE_SS_CACHE_DISPATCH_PORT_SHARING_DISABLE);
+
+       /* WaDisableLSQCROPERFforOCL:kbl */
+       ret = wa_ring_whitelist_reg(engine, GEN8_L3SQCREG4);
+       if (ret)
+               return ret;
+
        return 0;
 }
 
@@ -1199,6 +1271,9 @@ int init_workarounds_ring(struct intel_engine_cs *engine)
        if (IS_BROXTON(dev))
                return bxt_init_workarounds(engine);
 
+       if (IS_KABYLAKE(dev_priv))
+               return kbl_init_workarounds(engine);
+
        return 0;
 }
 
index 22706c0..49bd5da 100644
@@ -40,7 +40,8 @@ static int
 gf119_sor_dp_pattern(struct nvkm_output_dp *outp, int pattern)
 {
        struct nvkm_device *device = outp->base.disp->engine.subdev.device;
-       nvkm_mask(device, 0x61c110, 0x0f0f0f0f, 0x01010101 * pattern);
+       const u32 soff = gf119_sor_soff(outp);
+       nvkm_mask(device, 0x61c110 + soff, 0x0f0f0f0f, 0x01010101 * pattern);
        return 0;
 }
 
index 4182a21..41cacec 100644
@@ -65,6 +65,14 @@ static void sun4i_crtc_disable(struct drm_crtc *crtc)
        DRM_DEBUG_DRIVER("Disabling the CRTC\n");
 
        sun4i_tcon_disable(drv->tcon);
+
+       if (crtc->state->event && !crtc->state->active) {
+               spin_lock_irq(&crtc->dev->event_lock);
+               drm_crtc_send_vblank_event(crtc, crtc->state->event);
+               spin_unlock_irq(&crtc->dev->event_lock);
+
+               crtc->state->event = NULL;
+       }
 }
 
 static void sun4i_crtc_enable(struct drm_crtc *crtc)
index 257d2b4..937394c 100644
@@ -92,7 +92,7 @@ static struct drm_driver sun4i_drv_driver = {
        /* Frame Buffer Operations */
 
        /* VBlank Operations */
-       .get_vblank_counter     = drm_vblank_count,
+       .get_vblank_counter     = drm_vblank_no_hw_counter,
        .enable_vblank          = sun4i_drv_enable_vblank,
        .disable_vblank         = sun4i_drv_disable_vblank,
 };
@@ -310,6 +310,7 @@ static int sun4i_drv_probe(struct platform_device *pdev)
 
                count += sun4i_drv_add_endpoints(&pdev->dev, &match,
                                                pipeline);
+               of_node_put(pipeline);
 
                DRM_DEBUG_DRIVER("Queued %d outputs on pipeline %d\n",
                                 count, i);
index 39386f5..a71cf98 100644
@@ -1034,9 +1034,9 @@ out_unlock:
        return ret;
 }
 
-static bool ttm_bo_mem_compat(struct ttm_placement *placement,
-                             struct ttm_mem_reg *mem,
-                             uint32_t *new_flags)
+bool ttm_bo_mem_compat(struct ttm_placement *placement,
+                      struct ttm_mem_reg *mem,
+                      uint32_t *new_flags)
 {
        int i;
 
@@ -1068,6 +1068,7 @@ static bool ttm_bo_mem_compat(struct ttm_placement *placement,
 
        return false;
 }
+EXPORT_SYMBOL(ttm_bo_mem_compat);
 
 int ttm_bo_validate(struct ttm_buffer_object *bo,
                        struct ttm_placement *placement,
index 9b078a4..0cd8890 100644
@@ -49,6 +49,7 @@ int vmw_dmabuf_pin_in_placement(struct vmw_private *dev_priv,
 {
        struct ttm_buffer_object *bo = &buf->base;
        int ret;
+       uint32_t new_flags;
 
        ret = ttm_write_lock(&dev_priv->reservation_sem, interruptible);
        if (unlikely(ret != 0))
@@ -60,7 +61,12 @@ int vmw_dmabuf_pin_in_placement(struct vmw_private *dev_priv,
        if (unlikely(ret != 0))
                goto err;
 
-       ret = ttm_bo_validate(bo, placement, interruptible, false);
+       if (buf->pin_count > 0)
+               ret = ttm_bo_mem_compat(placement, &bo->mem,
+                                       &new_flags) == true ? 0 : -EINVAL;
+       else
+               ret = ttm_bo_validate(bo, placement, interruptible, false);
+
        if (!ret)
                vmw_bo_pin_reserved(buf, true);
 
@@ -91,6 +97,7 @@ int vmw_dmabuf_pin_in_vram_or_gmr(struct vmw_private *dev_priv,
 {
        struct ttm_buffer_object *bo = &buf->base;
        int ret;
+       uint32_t new_flags;
 
        ret = ttm_write_lock(&dev_priv->reservation_sem, interruptible);
        if (unlikely(ret != 0))
@@ -102,6 +109,12 @@ int vmw_dmabuf_pin_in_vram_or_gmr(struct vmw_private *dev_priv,
        if (unlikely(ret != 0))
                goto err;
 
+       if (buf->pin_count > 0) {
+               ret = ttm_bo_mem_compat(&vmw_vram_gmr_placement, &bo->mem,
+                                       &new_flags) == true ? 0 : -EINVAL;
+               goto out_unreserve;
+       }
+
        ret = ttm_bo_validate(bo, &vmw_vram_gmr_placement, interruptible,
                              false);
        if (likely(ret == 0) || ret == -ERESTARTSYS)
@@ -161,6 +174,7 @@ int vmw_dmabuf_pin_in_start_of_vram(struct vmw_private *dev_priv,
        struct ttm_placement placement;
        struct ttm_place place;
        int ret = 0;
+       uint32_t new_flags;
 
        place = vmw_vram_placement.placement[0];
        place.lpfn = bo->num_pages;
@@ -185,10 +199,15 @@ int vmw_dmabuf_pin_in_start_of_vram(struct vmw_private *dev_priv,
         */
        if (bo->mem.mem_type == TTM_PL_VRAM &&
            bo->mem.start < bo->num_pages &&
-           bo->mem.start > 0)
+           bo->mem.start > 0 &&
+           buf->pin_count == 0)
                (void) ttm_bo_validate(bo, &vmw_sys_placement, false, false);
 
-       ret = ttm_bo_validate(bo, &placement, interruptible, false);
+       if (buf->pin_count > 0)
+               ret = ttm_bo_mem_compat(&placement, &bo->mem,
+                                       &new_flags) == true ? 0 : -EINVAL;
+       else
+               ret = ttm_bo_validate(bo, &placement, interruptible, false);
 
        /* For some reason we didn't end up at the start of vram */
        WARN_ON(ret == 0 && bo->offset != 0);
index 9fcd820..8d528fc 100644
@@ -233,6 +233,7 @@ static int vmw_force_iommu;
 static int vmw_restrict_iommu;
 static int vmw_force_coherent;
 static int vmw_restrict_dma_mask;
+static int vmw_assume_16bpp;
 
 static int vmw_probe(struct pci_dev *, const struct pci_device_id *);
 static void vmw_master_init(struct vmw_master *);
@@ -249,6 +250,8 @@ MODULE_PARM_DESC(force_coherent, "Force coherent TTM pages");
 module_param_named(force_coherent, vmw_force_coherent, int, 0600);
 MODULE_PARM_DESC(restrict_dma_mask, "Restrict DMA mask to 44 bits with IOMMU");
 module_param_named(restrict_dma_mask, vmw_restrict_dma_mask, int, 0600);
+MODULE_PARM_DESC(assume_16bpp, "Assume 16-bpp when filtering modes");
+module_param_named(assume_16bpp, vmw_assume_16bpp, int, 0600);
 
 
 static void vmw_print_capabilities(uint32_t capabilities)
@@ -660,6 +663,8 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
        dev_priv->vram_start = pci_resource_start(dev->pdev, 1);
        dev_priv->mmio_start = pci_resource_start(dev->pdev, 2);
 
+       dev_priv->assume_16bpp = !!vmw_assume_16bpp;
+
        dev_priv->enable_fb = enable_fbdev;
 
        vmw_write(dev_priv, SVGA_REG_ID, SVGA_ID_2);
@@ -706,6 +711,13 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
                        vmw_read(dev_priv,
                                 SVGA_REG_SUGGESTED_GBOBJECT_MEM_SIZE_KB);
 
+               /*
+                * Workaround for low memory 2D VMs to compensate for the
+                * allocation taken by fbdev
+                */
+               if (!(dev_priv->capabilities & SVGA_CAP_3D))
+                       mem_size *= 2;
+
                dev_priv->max_mob_pages = mem_size * 1024 / PAGE_SIZE;
                dev_priv->prim_bb_mem =
                        vmw_read(dev_priv,
index 1980e2a..89fb194 100644
@@ -386,6 +386,7 @@ struct vmw_private {
        spinlock_t hw_lock;
        spinlock_t cap_lock;
        bool has_dx;
+       bool assume_16bpp;
 
        /*
         * VGA registers.
index 679a4cb..d2d9395 100644
@@ -517,28 +517,6 @@ static int vmw_fb_kms_framebuffer(struct fb_info *info)
 
        par->set_fb = &vfb->base;
 
-       if (!par->bo_ptr) {
-               /*
-                * Pin before mapping. Since we don't know in what placement
-                * to pin, call into KMS to do it for us.
-                */
-               ret = vfb->pin(vfb);
-               if (ret) {
-                       DRM_ERROR("Could not pin the fbdev framebuffer.\n");
-                       return ret;
-               }
-
-               ret = ttm_bo_kmap(&par->vmw_bo->base, 0,
-                                 par->vmw_bo->base.num_pages, &par->map);
-               if (ret) {
-                       vfb->unpin(vfb);
-                       DRM_ERROR("Could not map the fbdev framebuffer.\n");
-                       return ret;
-               }
-
-               par->bo_ptr = ttm_kmap_obj_virtual(&par->map, &par->bo_iowrite);
-       }
-
        return 0;
 }
 
@@ -601,6 +579,31 @@ static int vmw_fb_set_par(struct fb_info *info)
        if (ret)
                goto out_unlock;
 
+       if (!par->bo_ptr) {
+               struct vmw_framebuffer *vfb = vmw_framebuffer_to_vfb(set.fb);
+
+               /*
+                * Pin before mapping. Since we don't know in what placement
+                * to pin, call into KMS to do it for us.
+                */
+               ret = vfb->pin(vfb);
+               if (ret) {
+                       DRM_ERROR("Could not pin the fbdev framebuffer.\n");
+                       goto out_unlock;
+               }
+
+               ret = ttm_bo_kmap(&par->vmw_bo->base, 0,
+                                 par->vmw_bo->base.num_pages, &par->map);
+               if (ret) {
+                       vfb->unpin(vfb);
+                       DRM_ERROR("Could not map the fbdev framebuffer.\n");
+                       goto out_unlock;
+               }
+
+               par->bo_ptr = ttm_kmap_obj_virtual(&par->map, &par->bo_iowrite);
+       }
+
+
        vmw_fb_dirty_mark(par, par->fb_x, par->fb_y,
                          par->set_fb->width, par->set_fb->height);
 
index 55231cc..e29da45 100644
@@ -1553,14 +1553,10 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector,
                DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
        };
        int i;
-       u32 assumed_bpp = 2;
+       u32 assumed_bpp = 4;
 
-       /*
-        * If using screen objects, then assume 32-bpp because that's what the
-        * SVGA device is assuming
-        */
-       if (dev_priv->active_display_unit == vmw_du_screen_object)
-               assumed_bpp = 4;
+       if (dev_priv->assume_16bpp)
+               assumed_bpp = 2;
 
        if (dev_priv->active_display_unit == vmw_du_screen_target) {
                max_width  = min(max_width,  dev_priv->stdu_max_width);
index f0374f9..e57a0ba 100644
@@ -300,6 +300,9 @@ static int vmw_recv_msg(struct rpc_channel *channel, void **msg,
                break;
        }
 
+       if (retries == RETRIES)
+               return -EINVAL;
+
        *msg_len = reply_len;
        *msg     = reply;
 
index 9ca818f..41932a7 100644
@@ -399,8 +399,10 @@ static int vmw_stdu_bind_fb(struct vmw_private *dev_priv,
 
        WARN_ON_ONCE(!stdu->defined);
 
-       if (!vfb->dmabuf && new_fb->width == mode->hdisplay &&
-           new_fb->height == mode->vdisplay)
+       new_vfbs = (vfb->dmabuf) ? NULL : vmw_framebuffer_to_vfbs(new_fb);
+
+       if (new_vfbs && new_vfbs->surface->base_size.width == mode->hdisplay &&
+           new_vfbs->surface->base_size.height == mode->vdisplay)
                new_content_type = SAME_AS_DISPLAY;
        else if (vfb->dmabuf)
                new_content_type = SEPARATE_DMA;
@@ -444,7 +446,6 @@ static int vmw_stdu_bind_fb(struct vmw_private *dev_priv,
                        content_srf.mip_levels[0]     = 1;
                        content_srf.multisample_count = 0;
                } else {
-                       new_vfbs = vmw_framebuffer_to_vfbs(new_fb);
                        content_srf = *new_vfbs->surface;
                }
 
@@ -464,7 +465,6 @@ static int vmw_stdu_bind_fb(struct vmw_private *dev_priv,
                        return ret;
                }
        } else if (new_content_type == SAME_AS_DISPLAY) {
-               new_vfbs = vmw_framebuffer_to_vfbs(new_fb);
                new_display_srf = vmw_surface_reference(new_vfbs->surface);
        }
 
index cc6439a..041050e 100644
@@ -1268,6 +1268,8 @@ static int qup_i2c_xfer_v2(struct i2c_adapter *adap,
                }
        }
 
+       idx = 0;
+
        do {
                if (msgs[idx].len == 0) {
                        ret = -EINVAL;
index 445398c..b126dba 100644
@@ -912,7 +912,7 @@ static int tegra_i2c_probe(struct platform_device *pdev)
        ret = tegra_i2c_init(i2c_dev);
        if (ret) {
                dev_err(&pdev->dev, "Failed to initialize i2c controller");
-               goto unprepare_div_clk;
+               goto disable_div_clk;
        }
 
        ret = devm_request_irq(&pdev->dev, i2c_dev->irq,
index e33022e..6e5fac6 100644
@@ -56,9 +56,7 @@ EXPORT_SYMBOL_GPL(__i2c_first_dynamic_bus_num);
  * The board info passed can safely be __initdata, but be careful of embedded
  * pointers (for platform_data, functions, etc) since that won't be copied.
  */
-int __init
-i2c_register_board_info(int busnum,
-       struct i2c_board_info const *info, unsigned len)
+int i2c_register_board_info(int busnum, struct i2c_board_info const *info, unsigned len)
 {
        int status;
 
index 26e7c51..c6a90b4 100644
@@ -145,7 +145,7 @@ static int i2c_mux_reg_probe_dt(struct regmux *mux,
                mux->data.idle_in_use = true;
 
        /* map address from "reg" if exists */
-       if (of_address_to_resource(np, 0, &res)) {
+       if (of_address_to_resource(np, 0, &res) == 0) {
                mux->data.reg_size = resource_size(&res);
                mux->data.reg = devm_ioremap_resource(&pdev->dev, &res);
                if (IS_ERR(mux->data.reg))
index a5793c8..60df4f8 100644
@@ -530,6 +530,7 @@ static PORT_PMA_ATTR(port_xmit_data             , 12, 32, 192);
 static PORT_PMA_ATTR(port_rcv_data                 , 13, 32, 224);
 static PORT_PMA_ATTR(port_xmit_packets             , 14, 32, 256);
 static PORT_PMA_ATTR(port_rcv_packets              , 15, 32, 288);
+static PORT_PMA_ATTR(port_xmit_wait                ,  0, 32, 320);
 
 /*
  * Counters added by extended set
@@ -560,6 +561,7 @@ static struct attribute *pma_attrs[] = {
        &port_pma_attr_port_rcv_data.attr.attr,
        &port_pma_attr_port_xmit_packets.attr.attr,
        &port_pma_attr_port_rcv_packets.attr.attr,
+       &port_pma_attr_port_xmit_wait.attr.attr,
        NULL
 };
 
@@ -579,6 +581,7 @@ static struct attribute *pma_attrs_ext[] = {
        &port_pma_attr_ext_port_xmit_data.attr.attr,
        &port_pma_attr_ext_port_rcv_data.attr.attr,
        &port_pma_attr_ext_port_xmit_packets.attr.attr,
+       &port_pma_attr_port_xmit_wait.attr.attr,
        &port_pma_attr_ext_port_rcv_packets.attr.attr,
        &port_pma_attr_ext_unicast_rcv_packets.attr.attr,
        &port_pma_attr_ext_unicast_xmit_packets.attr.attr,
@@ -604,6 +607,7 @@ static struct attribute *pma_attrs_noietf[] = {
        &port_pma_attr_ext_port_rcv_data.attr.attr,
        &port_pma_attr_ext_port_xmit_packets.attr.attr,
        &port_pma_attr_ext_port_rcv_packets.attr.attr,
+       &port_pma_attr_port_xmit_wait.attr.attr,
        NULL
 };
 
index f5de851..dad4d0e 100644
@@ -14113,8 +14113,14 @@ static int init_asic_data(struct hfi1_devdata *dd)
 {
        unsigned long flags;
        struct hfi1_devdata *tmp, *peer = NULL;
+       struct hfi1_asic_data *asic_data;
        int ret = 0;
 
+       /* pre-allocate the asic structure in case we are the first device */
+       asic_data = kzalloc(sizeof(*dd->asic_data), GFP_KERNEL);
+       if (!asic_data)
+               return -ENOMEM;
+
        spin_lock_irqsave(&hfi1_devs_lock, flags);
        /* Find our peer device */
        list_for_each_entry(tmp, &hfi1_dev_list, list) {
@@ -14126,18 +14132,14 @@ static int init_asic_data(struct hfi1_devdata *dd)
        }
 
        if (peer) {
+               /* use already allocated structure */
                dd->asic_data = peer->asic_data;
+               kfree(asic_data);
        } else {
-               dd->asic_data = kzalloc(sizeof(*dd->asic_data), GFP_KERNEL);
-               if (!dd->asic_data) {
-                       ret = -ENOMEM;
-                       goto done;
-               }
+               dd->asic_data = asic_data;
                mutex_init(&dd->asic_data->asic_resource_mutex);
        }
        dd->asic_data->dds[dd->hfi1_id] = dd; /* self back-pointer */
-
-done:
        spin_unlock_irqrestore(&hfi1_devs_lock, flags);
        return ret;
 }
index 1e503ad..be91f6f 100644
@@ -678,8 +678,7 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
        u32 tlen = packet->tlen;
        struct rvt_qp *qp = packet->qp;
        bool has_grh = rcv_flags & HFI1_HAS_GRH;
-       bool sc4_bit = has_sc4_bit(packet);
-       u8 sc;
+       u8 sc5 = hdr2sc((struct hfi1_message_header *)hdr, packet->rhf);
        u32 bth1;
        int is_mcast;
        struct ib_grh *grh = NULL;
@@ -697,10 +696,8 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
                 */
                struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
                u32 lqpn =  be32_to_cpu(ohdr->bth[1]) & RVT_QPN_MASK;
-               u8 sl, sc5;
+               u8 sl;
 
-               sc5 = (be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf;
-               sc5 |= sc4_bit;
                sl = ibp->sc_to_sl[sc5];
 
                process_becn(ppd, sl, 0, lqpn, 0, IB_CC_SVCTYPE_UD);
@@ -717,10 +714,6 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
 
        if (!is_mcast && (opcode != IB_OPCODE_CNP) && bth1 & HFI1_FECN_SMASK) {
                u16 slid = be16_to_cpu(hdr->lrh[3]);
-               u8 sc5;
-
-               sc5 = (be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf;
-               sc5 |= sc4_bit;
 
                return_cnp(ibp, qp, src_qp, pkey, dlid, slid, sc5, grh);
        }
@@ -745,10 +738,6 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
                if (qp->ibqp.qp_num > 1) {
                        struct hfi1_pportdata *ppd = ppd_from_ibp(ibp);
                        u16 slid;
-                       u8 sc5;
-
-                       sc5 = (be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf;
-                       sc5 |= sc4_bit;
 
                        slid = be16_to_cpu(hdr->lrh[3]);
                        if (unlikely(rcv_pkey_check(ppd, pkey, sc5, slid))) {
@@ -790,10 +779,6 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
                /* Received on QP0, and so by definition, this is an SMP */
                struct opa_smp *smp = (struct opa_smp *)data;
                u16 slid = be16_to_cpu(hdr->lrh[3]);
-               u8 sc5;
-
-               sc5 = (be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf;
-               sc5 |= sc4_bit;
 
                if (opa_smp_check(ibp, pkey, sc5, qp, slid, smp))
                        goto drop;
@@ -890,9 +875,7 @@ void hfi1_ud_rcv(struct hfi1_packet *packet)
        }
 
        wc.slid = be16_to_cpu(hdr->lrh[3]);
-       sc = (be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf;
-       sc |= sc4_bit;
-       wc.sl = ibp->sc_to_sl[sc];
+       wc.sl = ibp->sc_to_sl[sc5];
 
        /*
         * Save the LMC lower bits if the destination LID is a unicast LID.
index c963cad..6e90813 100644
@@ -600,8 +600,7 @@ static enum i40iw_status_code i40iw_create_cqp(struct i40iw_device *iwdev)
        cqp_init_info.scratch_array = cqp->scratch_array;
        status = dev->cqp_ops->cqp_init(dev->cqp, &cqp_init_info);
        if (status) {
-               i40iw_pr_err("cqp init status %d maj_err %d min_err %d\n",
-                            status, maj_err, min_err);
+               i40iw_pr_err("cqp init status %d\n", status);
                goto exit;
        }
        status = dev->cqp_ops->cqp_create(dev->cqp, true, &maj_err, &min_err);
index 33959ed..283b64c 100644
@@ -1474,6 +1474,7 @@ static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr
        info->stag_idx = iwmr->stag >> I40IW_CQPSQ_STAG_IDX_SHIFT;
        info->pd_id = iwpd->sc_pd.pd_id;
        info->total_len = iwmr->length;
+       info->remote_access = true;
        cqp_info->cqp_cmd = OP_ALLOC_STAG;
        cqp_info->post_sq = 1;
        cqp_info->in.u.alloc_stag.dev = &iwdev->sc_dev;
index 8db8405..768085f 100644
@@ -232,7 +232,7 @@ static void stamp_send_wqe(struct mlx4_ib_qp *qp, int n, int size)
                }
        } else {
                ctrl = buf = get_send_wqe(qp, n & (qp->sq.wqe_cnt - 1));
-               s = (ctrl->fence_size & 0x3f) << 4;
+               s = (ctrl->qpn_vlan.fence_size & 0x3f) << 4;
                for (i = 64; i < s; i += 64) {
                        wqe = buf + i;
                        *wqe = cpu_to_be32(0xffffffff);
@@ -264,7 +264,7 @@ static void post_nop_wqe(struct mlx4_ib_qp *qp, int n, int size)
                inl->byte_count = cpu_to_be32(1 << 31 | (size - s - sizeof *inl));
        }
        ctrl->srcrb_flags = 0;
-       ctrl->fence_size = size / 16;
+       ctrl->qpn_vlan.fence_size = size / 16;
        /*
         * Make sure descriptor is fully written before setting ownership bit
         * (because HW can start executing as soon as we do).
@@ -1992,7 +1992,8 @@ static int __mlx4_ib_modify_qp(struct ib_qp *ibqp,
                        ctrl = get_send_wqe(qp, i);
                        ctrl->owner_opcode = cpu_to_be32(1 << 31);
                        if (qp->sq_max_wqes_per_wr == 1)
-                               ctrl->fence_size = 1 << (qp->sq.wqe_shift - 4);
+                               ctrl->qpn_vlan.fence_size =
+                                               1 << (qp->sq.wqe_shift - 4);
 
                        stamp_send_wqe(qp, i, 1 << qp->sq.wqe_shift);
                }
@@ -3169,8 +3170,8 @@ int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
                wmb();
                *lso_wqe = lso_hdr_sz;
 
-               ctrl->fence_size = (wr->send_flags & IB_SEND_FENCE ?
-                                   MLX4_WQE_CTRL_FENCE : 0) | size;
+               ctrl->qpn_vlan.fence_size = (wr->send_flags & IB_SEND_FENCE ?
+                                            MLX4_WQE_CTRL_FENCE : 0) | size;
 
                /*
                 * Make sure descriptor is fully written before
index 3438e98..a529a45 100644 (file)
@@ -1431,6 +1431,9 @@ static int xpad_probe(struct usb_interface *intf, const struct usb_device_id *id
        int ep_irq_in_idx;
        int i, error;
 
+       if (intf->cur_altsetting->desc.bNumEndpoints != 2)
+               return -ENODEV;
+
        for (i = 0; xpad_device[i].idVendor; i++) {
                if ((le16_to_cpu(udev->descriptor.idVendor) == xpad_device[i].idVendor) &&
                    (le16_to_cpu(udev->descriptor.idProduct) == xpad_device[i].idProduct))
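The xpad hunk above adds a probe-time sanity check on the descriptor-reported endpoint count, so a malicious or malformed device cannot make the driver index endpoints that do not exist. A minimal standalone sketch of that pattern (struct and values are illustrative, not the real USB API):

```c
#include <assert.h>

/* Toy model of the defensive probe check: refuse interfaces whose
 * current altsetting does not advertise exactly the two endpoints
 * the driver will dereference later.  -19 stands in for -ENODEV.
 */
struct toy_altsetting {
	int num_endpoints;
};

static int xpad_like_probe(const struct toy_altsetting *alt)
{
	if (alt->num_endpoints != 2)
		return -19;		/* -ENODEV: reject early */
	return 0;			/* safe to look up ep_irq_in etc. */
}
```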
index b368b05..253df96 100644 (file)
@@ -157,11 +157,11 @@ static int rmi_function_match(struct device *dev, struct device_driver *drv)
 static void rmi_function_of_probe(struct rmi_function *fn)
 {
        char of_name[9];
+       struct device_node *node = fn->rmi_dev->xport->dev->of_node;
 
        snprintf(of_name, sizeof(of_name), "rmi4-f%02x",
                fn->fd.function_number);
-       fn->dev.of_node = of_find_node_by_name(
-                               fn->rmi_dev->xport->dev->of_node, of_name);
+       fn->dev.of_node = of_get_child_by_name(node, of_name);
 }
 #else
 static inline void rmi_function_of_probe(struct rmi_function *fn)
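The rmi_bus change swaps a global by-name search for a scoped child lookup: `of_find_node_by_name()` walks the whole tree from a starting point, so it can match a same-named node under an unrelated parent, while `of_get_child_by_name()` only inspects direct children. A toy model of that difference (this is not the kernel OF API, just an illustration of the scoping):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy node tree: each node records its parent.  A scoped lookup
 * matches only direct children of the given parent, which is the
 * behavior the patch adopts via of_get_child_by_name().
 */
struct toy_node {
	const char *name;
	struct toy_node *parent;
};

static struct toy_node *get_child_by_name(struct toy_node **all, size_t n,
					  struct toy_node *parent,
					  const char *name)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (all[i]->parent == parent && !strcmp(all[i]->name, name))
			return all[i];
	return NULL;		/* no such direct child */
}
```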
index 8dd3fb5..88e9155 100644 (file)
@@ -66,7 +66,7 @@ static int rmi_f12_read_sensor_tuning(struct f12_data *f12)
        struct rmi_device *rmi_dev = fn->rmi_dev;
        int ret;
        int offset;
-       u8 buf[14];
+       u8 buf[15];
        int pitch_x = 0;
        int pitch_y = 0;
        int clip_x_low = 0;
@@ -86,9 +86,10 @@ static int rmi_f12_read_sensor_tuning(struct f12_data *f12)
 
        offset = rmi_register_desc_calc_reg_offset(&f12->control_reg_desc, 8);
 
-       if (item->reg_size > 14) {
-               dev_err(&fn->dev, "F12 control8 should be 14 bytes, not: %ld\n",
-                       item->reg_size);
+       if (item->reg_size > sizeof(buf)) {
+               dev_err(&fn->dev,
+                       "F12 control8 should be no bigger than %zd bytes, not: %ld\n",
+                       sizeof(buf), item->reg_size);
                return -ENODEV;
        }
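The F12 hunk replaces the duplicated magic number 14 with `sizeof(buf)` in both the check and the message, so growing the buffer can never silently de-synchronize the validation. A standalone sketch of bounding a device-reported length by the destination buffer (names hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Validate a device-reported register size against the destination
 * buffer before copying.  Using the buffer's own size in the check
 * mirrors the sizeof(buf) pattern the patch adopts.
 */
static int read_reg_checked(unsigned char *buf, size_t bufsz,
			    const unsigned char *dev_data, size_t reg_size)
{
	if (reg_size > bufsz)
		return -1;	/* driver returns -ENODEV here */

	memcpy(buf, dev_data, reg_size);
	return 0;
}
```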
 
index 3c3dd78..fed73ee 100644 (file)
@@ -118,6 +118,13 @@ static int ts4800_parse_dt(struct platform_device *pdev,
                return -ENODEV;
        }
 
+       ts->regmap = syscon_node_to_regmap(syscon_np);
+       of_node_put(syscon_np);
+       if (IS_ERR(ts->regmap)) {
+               dev_err(dev, "cannot get parent's regmap\n");
+               return PTR_ERR(ts->regmap);
+       }
+
        error = of_property_read_u32_index(np, "syscon", 1, &reg);
        if (error < 0) {
                dev_err(dev, "no offset in syscon\n");
@@ -134,12 +141,6 @@ static int ts4800_parse_dt(struct platform_device *pdev,
 
        ts->bit = BIT(bit);
 
-       ts->regmap = syscon_node_to_regmap(syscon_np);
-       if (IS_ERR(ts->regmap)) {
-               dev_err(dev, "cannot get parent's regmap\n");
-               return PTR_ERR(ts->regmap);
-       }
-
        return 0;
 }
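The ts4800 reorder lets the driver convert the syscon node to a regmap and drop the node reference (`of_node_put`) immediately, so the later `of_property_read_u32_index()` error returns can no longer leak it. A toy refcount model of that take/convert/release discipline (the counter is a stand-in for the kobject refcount):

```c
#include <assert.h>

/* Model: a global reference counter stands in for the device-node
 * refcount.  Dropping the reference as soon as the node has been
 * converted to the resource we need means no error path can leak it.
 */
static int refs;

static void node_get(void) { refs++; }
static void node_put(void) { refs--; }

static int parse_dt(int regmap_ok, int offset_ok)
{
	node_get();		/* of_parse_phandle() took a reference */
	/* syscon_node_to_regmap(syscon_np) consumes the node ... */
	node_put();		/* ... so release it right away */
	if (!regmap_ok)
		return -1;

	if (!offset_ok)		/* later failures no longer leak the node */
		return -1;
	return 0;
}
```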
 
index 7295c19..6fe55d5 100644 (file)
 #include <linux/regmap.h>
 #include "tsc200x-core.h"
 
+static const struct input_id tsc2004_input_id = {
+       .bustype = BUS_I2C,
+       .product = 2004,
+};
+
 static int tsc2004_cmd(struct device *dev, u8 cmd)
 {
        u8 tx = TSC200X_CMD | TSC200X_CMD_12BIT | cmd;
@@ -42,7 +47,7 @@ static int tsc2004_probe(struct i2c_client *i2c,
                         const struct i2c_device_id *id)
 
 {
-       return tsc200x_probe(&i2c->dev, i2c->irq, BUS_I2C,
+       return tsc200x_probe(&i2c->dev, i2c->irq, &tsc2004_input_id,
                             devm_regmap_init_i2c(i2c, &tsc200x_regmap_config),
                             tsc2004_cmd);
 }
index b9f593d..f2c5f0e 100644 (file)
 #include <linux/regmap.h>
 #include "tsc200x-core.h"
 
+static const struct input_id tsc2005_input_id = {
+       .bustype = BUS_SPI,
+       .product = 2005,
+};
+
 static int tsc2005_cmd(struct device *dev, u8 cmd)
 {
        u8 tx = TSC200X_CMD | TSC200X_CMD_12BIT | cmd;
@@ -62,7 +67,7 @@ static int tsc2005_probe(struct spi_device *spi)
        if (error)
                return error;
 
-       return tsc200x_probe(&spi->dev, spi->irq, BUS_SPI,
+       return tsc200x_probe(&spi->dev, spi->irq, &tsc2005_input_id,
                             devm_regmap_init_spi(spi, &tsc200x_regmap_config),
                             tsc2005_cmd);
 }
index 15240c1..dfa7f1c 100644 (file)
@@ -450,7 +450,7 @@ static void tsc200x_close(struct input_dev *input)
        mutex_unlock(&ts->mutex);
 }
 
-int tsc200x_probe(struct device *dev, int irq, __u16 bustype,
+int tsc200x_probe(struct device *dev, int irq, const struct input_id *tsc_id,
                  struct regmap *regmap,
                  int (*tsc200x_cmd)(struct device *dev, u8 cmd))
 {
@@ -547,9 +547,18 @@ int tsc200x_probe(struct device *dev, int irq, __u16 bustype,
        snprintf(ts->phys, sizeof(ts->phys),
                 "%s/input-ts", dev_name(dev));
 
-       input_dev->name = "TSC200X touchscreen";
+       if (tsc_id->product == 2004) {
+               input_dev->name = "TSC200X touchscreen";
+       } else {
+               input_dev->name = devm_kasprintf(dev, GFP_KERNEL,
+                                                "TSC%04d touchscreen",
+                                                tsc_id->product);
+               if (!input_dev->name)
+                       return -ENOMEM;
+       }
+
        input_dev->phys = ts->phys;
-       input_dev->id.bustype = bustype;
+       input_dev->id = *tsc_id;
        input_dev->dev.parent = dev;
        input_dev->evbit[0] = BIT(EV_ABS) | BIT(EV_KEY);
        input_dev->keybit[BIT_WORD(BTN_TOUCH)] = BIT_MASK(BTN_TOUCH);
index 7a482d1..49a63a3 100644 (file)
@@ -70,7 +70,7 @@
 extern const struct regmap_config tsc200x_regmap_config;
 extern const struct dev_pm_ops tsc200x_pm_ops;
 
-int tsc200x_probe(struct device *dev, int irq, __u16 bustype,
+int tsc200x_probe(struct device *dev, int irq, const struct input_id *tsc_id,
                  struct regmap *regmap,
                  int (*tsc200x_cmd)(struct device *dev, u8 cmd));
 int tsc200x_remove(struct device *dev);
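The tsc200x refactor above replaces the bare `bustype` argument with a full `const struct input_id`, letting each bus front-end carry a product number that the core uses to build the device name (with 2004 keeping the legacy name for compatibility). A minimal standalone sketch of that naming logic (toy struct, not the real input API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Each front-end supplies an id; the core derives the input device
 * name from the product number, special-casing 2004 to preserve the
 * historical "TSC200X touchscreen" string.
 */
struct toy_input_id {
	int bustype;
	int product;
};

static const char *dev_name_for(const struct toy_input_id *id,
				char *buf, size_t len)
{
	if (id->product == 2004)
		return "TSC200X touchscreen";

	snprintf(buf, len, "TSC%04d touchscreen", id->product);
	return buf;
}
```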
index 0c9191c..b6fc4bd 100644 (file)
@@ -155,6 +155,7 @@ static void parse_multi_touch(struct w8001 *w8001)
                bool touch = data[0] & (1 << i);
 
                input_mt_slot(dev, i);
+               input_mt_report_slot_state(dev, MT_TOOL_FINGER, touch);
                if (touch) {
                        x = (data[6 * i + 1] << 7) | data[6 * i + 2];
                        y = (data[6 * i + 3] << 7) | data[6 * i + 4];
@@ -522,6 +523,8 @@ static int w8001_setup_touch(struct w8001 *w8001, char *basename,
                                        0, touch.x, 0, 0);
                input_set_abs_params(dev, ABS_MT_POSITION_Y,
                                        0, touch.y, 0, 0);
+               input_set_abs_params(dev, ABS_MT_TOOL_TYPE,
+                                       0, MT_TOOL_MAX, 0, 0);
 
                strlcat(basename, " 2FG", basename_sz);
                if (w8001->max_pen_x && w8001->max_pen_y)
index d091def..59741ea 100644 (file)
@@ -1568,13 +1568,23 @@ static int __init amd_iommu_init_pci(void)
                        break;
        }
 
+       /*
+        * Order is important here to make sure any unity map requirements are
+        * fulfilled. The unity mappings are created and written to the device
+        * table during the amd_iommu_init_api() call.
+        *
+        * After that we call init_device_table_dma() to make sure any
+        * uninitialized DTE will block DMA, and in the end we flush the caches
+        * of all IOMMUs to make sure the changes to the device table are
+        * active.
+        */
+       ret = amd_iommu_init_api();
+
        init_device_table_dma();
 
        for_each_iommu(iommu)
                iommu_flush_all_caches(iommu);
 
-       ret = amd_iommu_init_api();
-
        if (!ret)
                print_iommu_info();
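The comment in the amd_iommu hunk spells out a strict ordering: write the unity mappings first (via `amd_iommu_init_api()`), then set the blocking defaults for uninitialized DTEs, then flush all IOMMU caches. A trivial sequence-recording sketch of that invariant (step names are stand-ins for the real calls):

```c
#include <assert.h>
#include <string.h>

/* Record the init order so the required sequence can be asserted:
 * 'a' = amd_iommu_init_api() writes unity mappings,
 * 'd' = init_device_table_dma() blocks unset DTEs,
 * 'f' = iommu_flush_all_caches() makes the table active.
 */
static char order_log[4];
static int order_n;

static void step(char c)
{
	order_log[order_n++] = c;
}

static void init_pci_like(void)
{
	order_n = 0;
	step('a');
	step('d');
	step('f');
}
```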
 
index cfe410e..323dac9 100644 (file)
@@ -4602,13 +4602,13 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
        for (i = 0; i < g_num_of_iommus; i++) {
                struct intel_iommu *iommu = g_iommus[i];
                struct dmar_domain *domain;
-               u16 did;
+               int did;
 
                if (!iommu)
                        continue;
 
                for (did = 0; did < cap_ndoms(iommu->cap); did++) {
-                       domain = get_iommu_domain(iommu, did);
+                       domain = get_iommu_domain(iommu, (u16)did);
 
                        if (!domain)
                                continue;
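The intel-iommu hunk widens the loop counter from `u16` to `int` because `cap_ndoms()` can legitimately be 65536: a 16-bit counter can never reach that bound, so `did < cap_ndoms(...)` would spin forever after the counter wraps from 65535 back to 0. A standalone demonstration of the wraparound and the safe form (helper names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* A u16 increment wraps at 65535, which is exactly why the original
 * "for (u16 did = 0; did < 65536; did++)" could never terminate.
 */
static uint16_t next_did_u16(uint16_t did)
{
	return did + 1;		/* 65535 -> 0 */
}

/* The fixed shape: a plain int counter covers the full domain-id
 * range; the cast back to u16 happens only at the call site, as in
 * get_iommu_domain(iommu, (u16)did).
 */
static int visits_all_domains(uint32_t ndoms)
{
	int did, visited = 0;

	for (did = 0; did < (int)ndoms; did++)
		visited++;
	return visited == (int)ndoms;
}
```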
index 8a4adbe..70ed1d0 100644 (file)
@@ -718,7 +718,7 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
 
        spin_lock_irqsave(&gic_lock, flags);
        gic_map_to_pin(intr, gic_cpu_pin);
-       gic_map_to_vpe(intr, vpe);
+       gic_map_to_vpe(intr, mips_cm_vp_id(vpe));
        for (i = 0; i < min(gic_vpes, NR_CPUS); i++)
                clear_bit(intr, pcpu_masks[i].pcpu_mask);
        set_bit(intr, pcpu_masks[vpe].pcpu_mask);
@@ -959,7 +959,7 @@ int gic_ipi_domain_match(struct irq_domain *d, struct device_node *node,
        switch (bus_token) {
        case DOMAIN_BUS_IPI:
                is_ipi = d->bus_token == bus_token;
-               return to_of_node(d->fwnode) == node && is_ipi;
+               return (!node || to_of_node(d->fwnode) == node) && is_ipi;
                break;
        default:
                return 0;
index beb2841..3f1ab49 100644 (file)
@@ -779,11 +779,31 @@ static const struct v4l2_dv_timings_cap adv76xx_timings_cap_digital = {
                        V4L2_DV_BT_CAP_CUSTOM)
 };
 
-static inline const struct v4l2_dv_timings_cap *
-adv76xx_get_dv_timings_cap(struct v4l2_subdev *sd)
+/*
+ * Return the DV timings capabilities for the requested sink pad. As a special
+ * case, pad value -1 returns the capabilities for the currently selected input.
+ */
+static const struct v4l2_dv_timings_cap *
+adv76xx_get_dv_timings_cap(struct v4l2_subdev *sd, int pad)
 {
-       return is_digital_input(sd) ? &adv76xx_timings_cap_digital :
-                                     &adv7604_timings_cap_analog;
+       if (pad == -1) {
+               struct adv76xx_state *state = to_state(sd);
+
+               pad = state->selected_input;
+       }
+
+       switch (pad) {
+       case ADV76XX_PAD_HDMI_PORT_A:
+       case ADV7604_PAD_HDMI_PORT_B:
+       case ADV7604_PAD_HDMI_PORT_C:
+       case ADV7604_PAD_HDMI_PORT_D:
+               return &adv76xx_timings_cap_digital;
+
+       case ADV7604_PAD_VGA_RGB:
+       case ADV7604_PAD_VGA_COMP:
+       default:
+               return &adv7604_timings_cap_analog;
+       }
 }
 
 
@@ -1329,7 +1349,7 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
                const struct v4l2_bt_timings *bt = &v4l2_dv_timings_presets[i].bt;
 
                if (!v4l2_valid_dv_timings(&v4l2_dv_timings_presets[i],
-                                          adv76xx_get_dv_timings_cap(sd),
+                                          adv76xx_get_dv_timings_cap(sd, -1),
                                           adv76xx_check_dv_timings, NULL))
                        continue;
                if (vtotal(bt) != stdi->lcf + 1)
@@ -1430,18 +1450,22 @@ static int adv76xx_enum_dv_timings(struct v4l2_subdev *sd,
                return -EINVAL;
 
        return v4l2_enum_dv_timings_cap(timings,
-               adv76xx_get_dv_timings_cap(sd), adv76xx_check_dv_timings, NULL);
+               adv76xx_get_dv_timings_cap(sd, timings->pad),
+               adv76xx_check_dv_timings, NULL);
 }
 
 static int adv76xx_dv_timings_cap(struct v4l2_subdev *sd,
                        struct v4l2_dv_timings_cap *cap)
 {
        struct adv76xx_state *state = to_state(sd);
+       unsigned int pad = cap->pad;
 
        if (cap->pad >= state->source_pad)
                return -EINVAL;
 
-       *cap = *adv76xx_get_dv_timings_cap(sd);
+       *cap = *adv76xx_get_dv_timings_cap(sd, pad);
+       cap->pad = pad;
+
        return 0;
 }
 
@@ -1450,9 +1474,9 @@ static int adv76xx_dv_timings_cap(struct v4l2_subdev *sd,
 static void adv76xx_fill_optional_dv_timings_fields(struct v4l2_subdev *sd,
                struct v4l2_dv_timings *timings)
 {
-       v4l2_find_dv_timings_cap(timings, adv76xx_get_dv_timings_cap(sd),
-                       is_digital_input(sd) ? 250000 : 1000000,
-                       adv76xx_check_dv_timings, NULL);
+       v4l2_find_dv_timings_cap(timings, adv76xx_get_dv_timings_cap(sd, -1),
+                                is_digital_input(sd) ? 250000 : 1000000,
+                                adv76xx_check_dv_timings, NULL);
 }
 
 static unsigned int adv7604_read_hdmi_pixelclock(struct v4l2_subdev *sd)
@@ -1620,7 +1644,7 @@ static int adv76xx_s_dv_timings(struct v4l2_subdev *sd,
 
        bt = &timings->bt;
 
-       if (!v4l2_valid_dv_timings(timings, adv76xx_get_dv_timings_cap(sd),
+       if (!v4l2_valid_dv_timings(timings, adv76xx_get_dv_timings_cap(sd, -1),
                                   adv76xx_check_dv_timings, NULL))
                return -ERANGE;
 
index 3f9e6df..642b89c 100644 (file)
@@ -1472,7 +1472,7 @@ static long fm_st_receive(void *arg, struct sk_buff *skb)
  * Called by ST layer to indicate protocol registration completion
  * status.
  */
-static void fm_st_reg_comp_cb(void *arg, char data)
+static void fm_st_reg_comp_cb(void *arg, int data)
 {
        struct fmdev *fmdev;
 
index 87c1293..92d9d42 100644 (file)
@@ -1072,7 +1072,7 @@ static int airspy_probe(struct usb_interface *intf,
        if (ret) {
                dev_err(s->dev, "Failed to register as video device (%d)\n",
                                ret);
-               goto err_unregister_v4l2_dev;
+               goto err_free_controls;
        }
        dev_info(s->dev, "Registered as %s\n",
                        video_device_node_name(&s->vdev));
@@ -1081,7 +1081,6 @@ static int airspy_probe(struct usb_interface *intf,
 
 err_free_controls:
        v4l2_ctrl_handler_free(&s->hdl);
-err_unregister_v4l2_dev:
        v4l2_device_unregister(&s->v4l2_dev);
 err_free_mem:
        kfree(s);
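The airspy fix retargets the error `goto`: since the control handler is set up before video registration, a registration failure must jump to the label that also frees the controls, and the now-unreachable label is dropped. A standalone sketch of that reverse-order unwind convention (steps are stand-ins for the real V4L2 calls):

```c
#include <assert.h>

/* Each error label undoes everything initialized before the failure
 * point, in reverse order of setup.
 */
static int freed_ctrls, unregistered;

static int probe_like(int video_ok)
{
	freed_ctrls = unregistered = 0;

	/* ... v4l2 device registered, controls set up ... */
	if (!video_ok)
		goto err_free_controls;	/* must free controls too */

	return 0;

err_free_controls:
	freed_ctrls = 1;	/* v4l2_ctrl_handler_free() */
	unregistered = 1;	/* v4l2_device_unregister() */
	return -1;
}
```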
index 28e5be2..528390f 100644 (file)
@@ -2171,7 +2171,7 @@ static int v4l_cropcap(const struct v4l2_ioctl_ops *ops,
         * The determine_valid_ioctls() call already should ensure
         * that this can never happen, but just in case...
         */
-       if (WARN_ON(!ops->vidioc_cropcap && !ops->vidioc_cropcap))
+       if (WARN_ON(!ops->vidioc_cropcap && !ops->vidioc_g_selection))
                return -ENOTTY;
 
        if (ops->vidioc_cropcap)
index dcdbd58..0005159 100644 (file)
@@ -141,7 +141,7 @@ static void st_send_frame(unsigned char chnl_id, struct st_data_s *st_gdata)
  * This function is being called with spin lock held, protocol drivers are
  * only expected to complete their waits and do nothing more than that.
  */
-static void st_reg_complete(struct st_data_s *st_gdata, char err)
+static void st_reg_complete(struct st_data_s *st_gdata, int err)
 {
        unsigned char i = 0;
        pr_info(" %s ", __func__);
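Both the shared-transport hunks (fm_st_reg_comp_cb and st_reg_complete) widen a status parameter from `char` to `int`. The signedness of plain `char` is implementation-defined; on architectures where it is unsigned (e.g. ARM), a negative errno squeezed through `char` arrives as a large positive value and `err < 0` checks silently fail. A self-contained demonstration:

```c
#include <assert.h>

/* Forcing unsigned char here models a platform where plain char is
 * unsigned: -22 (-EINVAL) arrives as 234 and never tests negative.
 */
static int saw_negative_via_uchar(unsigned char err)
{
	return (int)err < 0;	/* always false: range is 0..255 */
}

/* Passing the status as int preserves the sign on every platform. */
static int saw_negative_via_int(int err)
{
	return err < 0;
}
```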
index e62fde3..c5472e3 100644 (file)
@@ -355,8 +355,10 @@ static struct mmc_blk_ioc_data *mmc_blk_ioctl_copy_from_user(
                goto idata_err;
        }
 
-       if (!idata->buf_bytes)
+       if (!idata->buf_bytes) {
+               idata->buf = NULL;
                return idata;
+       }
 
        idata->buf = kmalloc(idata->buf_bytes, GFP_KERNEL);
        if (!idata->buf) {
@@ -1786,8 +1788,8 @@ static void mmc_blk_packed_hdr_wrq_prep(struct mmc_queue_req *mqrq,
 
        packed_cmd_hdr = packed->cmd_hdr;
        memset(packed_cmd_hdr, 0, sizeof(packed->cmd_hdr));
-       packed_cmd_hdr[0] = (packed->nr_entries << 16) |
-               (PACKED_CMD_WR << 8) | PACKED_CMD_VER;
+       packed_cmd_hdr[0] = cpu_to_le32((packed->nr_entries << 16) |
+               (PACKED_CMD_WR << 8) | PACKED_CMD_VER);
        hdr_blocks = mmc_large_sector(card) ? 8 : 1;
 
        /*
@@ -1801,14 +1803,14 @@ static void mmc_blk_packed_hdr_wrq_prep(struct mmc_queue_req *mqrq,
                        ((brq->data.blocks * brq->data.blksz) >=
                         card->ext_csd.data_tag_unit_size);
                /* Argument of CMD23 */
-               packed_cmd_hdr[(i * 2)] =
+               packed_cmd_hdr[(i * 2)] = cpu_to_le32(
                        (do_rel_wr ? MMC_CMD23_ARG_REL_WR : 0) |
                        (do_data_tag ? MMC_CMD23_ARG_TAG_REQ : 0) |
-                       blk_rq_sectors(prq);
+                       blk_rq_sectors(prq));
                /* Argument of CMD18 or CMD25 */
-               packed_cmd_hdr[((i * 2)) + 1] =
+               packed_cmd_hdr[((i * 2)) + 1] = cpu_to_le32(
                        mmc_card_blockaddr(card) ?
-                       blk_rq_pos(prq) : blk_rq_pos(prq) << 9;
+                       blk_rq_pos(prq) : blk_rq_pos(prq) << 9);
                packed->blocks += blk_rq_sectors(prq);
                i++;
        }
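The mmc_blk hunks wrap each packed-header word in `cpu_to_le32()` because the eMMC packed command header is a little-endian structure on the wire: storing host-endian `u32`s produces wrong bytes on big-endian hosts. A portable stand-in that serializes byte-by-byte, so the result is identical on any host (the PACKED_CMD_* constant values below are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Explicit little-endian store: the byte layout no longer depends on
 * host endianness, which is what cpu_to_le32() guarantees in-kernel.
 */
static void put_le32(uint8_t *p, uint32_t v)
{
	p[0] = v & 0xff;
	p[1] = (v >> 8) & 0xff;
	p[2] = (v >> 16) & 0xff;
	p[3] = (v >> 24) & 0xff;
}

static void pack_hdr_word0(uint8_t *p, uint32_t nr_entries)
{
	/* mirrors (nr_entries << 16) | (PACKED_CMD_WR << 8) | PACKED_CMD_VER
	 * with illustrative constants 0x02 and 0x01 */
	put_le32(p, (nr_entries << 16) | (0x02u << 8) | 0x01u);
}
```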
index 86fac3e..c763b40 100644 (file)
@@ -789,14 +789,16 @@ static int pxamci_probe(struct platform_device *pdev)
                gpio_direction_output(gpio_power,
                                      host->pdata->gpio_power_invert);
        }
-       if (gpio_is_valid(gpio_ro))
+       if (gpio_is_valid(gpio_ro)) {
                ret = mmc_gpio_request_ro(mmc, gpio_ro);
-       if (ret) {
-               dev_err(&pdev->dev, "Failed requesting gpio_ro %d\n", gpio_ro);
-               goto out;
-       } else {
-               mmc->caps2 |= host->pdata->gpio_card_ro_invert ?
-                       0 : MMC_CAP2_RO_ACTIVE_HIGH;
+               if (ret) {
+                       dev_err(&pdev->dev, "Failed requesting gpio_ro %d\n",
+                               gpio_ro);
+                       goto out;
+               } else {
+                       mmc->caps2 |= host->pdata->gpio_card_ro_invert ?
+                               0 : MMC_CAP2_RO_ACTIVE_HIGH;
+               }
        }
 
        if (gpio_is_valid(gpio_cd))
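The pxamci hunk is a pure control-flow fix: without braces, the `if (ret)` check ran even when `gpio_ro` was invalid, so a stale `ret` value could send the probe down the failure path when no RO gpio was configured at all. A simplified standalone model of the corrected flow:

```c
#include <assert.h>
#include <stdbool.h>

/* The error check now lives INSIDE the gpio_is_valid() branch, so it
 * only fires when the request was actually attempted.  prev_ret
 * models whatever value ret held from earlier probe steps.
 */
static int probe_ro(bool gpio_valid, int request_result, int prev_ret)
{
	int ret = prev_ret;

	if (gpio_valid) {
		ret = request_result;	/* mmc_gpio_request_ro(...) */
		if (ret)
			return ret;	/* fail only if we requested */
		/* else: apply RO-active-high cap tweak here */
	}
	return 0;
}
```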
index 08e1588..a136da8 100644 (file)
@@ -1657,8 +1657,11 @@ static int omap_get_dt_info(struct device *dev, struct omap_nand_info *info)
 
        /* detect availability of ELM module. Won't be present pre-OMAP4 */
        info->elm_of_node = of_parse_phandle(child, "ti,elm-id", 0);
-       if (!info->elm_of_node)
-               dev_dbg(dev, "ti,elm-id not in DT\n");
+       if (!info->elm_of_node) {
+               info->elm_of_node = of_parse_phandle(child, "elm_id", 0);
+               if (!info->elm_of_node)
+                       dev_dbg(dev, "ti,elm-id not in DT\n");
+       }
 
        /* select ecc-scheme for NAND */
        if (of_property_read_string(child, "ti,nand-ecc-opt", &s)) {
index db760e8..b8df0f5 100644 (file)
@@ -446,7 +446,11 @@ static int bond_newlink(struct net *src_net, struct net_device *bond_dev,
        if (err < 0)
                return err;
 
-       return register_netdevice(bond_dev);
+       err = register_netdevice(bond_dev);
+
+       netif_carrier_off(bond_dev);
+
+       return err;
 }
 
 static size_t bond_get_size(const struct net_device *bond_dev)
index 444de66..bda37d3 100644 (file)
@@ -1581,6 +1581,18 @@ static const struct b53_chip_data b53_switch_chips[] = {
                .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
                .jumbo_size_reg = B53_JUMBO_MAX_SIZE,
        },
+       {
+               .chip_id = BCM58XX_DEVICE_ID,
+               .dev_name = "BCM585xx/586xx/88312",
+               .vlans  = 4096,
+               .enabled_ports = 0x1ff,
+               .arl_entries = 4,
+               .cpu_port = B53_CPU_PORT_25,
+               .vta_regs = B53_VTA_REGS,
+               .duplex_reg = B53_DUPLEX_STAT_GE,
+               .jumbo_pm_reg = B53_JUMBO_PORT_MASK,
+               .jumbo_size_reg = B53_JUMBO_MAX_SIZE,
+       },
 };
 
 static int b53_switch_init(struct b53_device *dev)
index 5d8c602..835a744 100644 (file)
@@ -59,6 +59,7 @@ enum {
        BCM53012_DEVICE_ID = 0x53012,
        BCM53018_DEVICE_ID = 0x53018,
        BCM53019_DEVICE_ID = 0x53019,
+       BCM58XX_DEVICE_ID = 0x5800,
 };
 
 #define B53_N_PORTS    9
index 70fd472..3e2d4a5 100644 (file)
@@ -21,6 +21,7 @@
 #include <linux/delay.h>
 #include <linux/platform_device.h>
 #include <linux/platform_data/b53.h>
+#include <linux/of.h>
 
 #include "b53_priv.h"
 
@@ -356,12 +357,45 @@ static struct b53_io_ops b53_srab_ops = {
        .write64 = b53_srab_write64,
 };
 
+static const struct of_device_id b53_srab_of_match[] = {
+       { .compatible = "brcm,bcm53010-srab" },
+       { .compatible = "brcm,bcm53011-srab" },
+       { .compatible = "brcm,bcm53012-srab" },
+       { .compatible = "brcm,bcm53018-srab" },
+       { .compatible = "brcm,bcm53019-srab" },
+       { .compatible = "brcm,bcm5301x-srab" },
+       { .compatible = "brcm,bcm58522-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm58525-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm58535-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm58622-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm58623-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm58625-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,bcm88312-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { .compatible = "brcm,nsp-srab", .data = (void *)BCM58XX_DEVICE_ID },
+       { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, b53_srab_of_match);
+
 static int b53_srab_probe(struct platform_device *pdev)
 {
+       struct b53_platform_data *pdata = pdev->dev.platform_data;
+       struct device_node *dn = pdev->dev.of_node;
+       const struct of_device_id *of_id = NULL;
        struct b53_srab_priv *priv;
        struct b53_device *dev;
        struct resource *r;
 
+       if (dn)
+               of_id = of_match_node(b53_srab_of_match, dn);
+
+       if (of_id) {
+               pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
+               if (!pdata)
+                       return -ENOMEM;
+
+               pdata->chip_id = (u32)(unsigned long)of_id->data;
+       }
+
        priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;
@@ -375,6 +409,9 @@ static int b53_srab_probe(struct platform_device *pdev)
        if (!dev)
                return -ENOMEM;
 
+       if (pdata)
+               dev->pdata = pdata;
+
        platform_set_drvdata(pdev, dev);
 
        return b53_switch_register(dev);
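The b53_srab probe now keys chip variants off the DT match table by smuggling the chip ID through `of_device_id.data` and recovering it with a cast via `unsigned long`. A toy version of that of_match pattern (struct and lookup are stand-ins for `of_match_node()`):

```c
#include <assert.h>
#include <string.h>

/* Match a compatible string, then recover the integer chip id stored
 * in the void-pointer .data field, exactly mirroring
 * (u32)(unsigned long)of_id->data in the probe above.
 */
struct toy_of_id {
	const char *compatible;
	const void *data;
};

#define BCM58XX_ID 0x5800u

static const struct toy_of_id toy_ids[] = {
	{ "brcm,bcm53010-srab", 0 },
	{ "brcm,bcm58522-srab", (void *)(unsigned long)BCM58XX_ID },
	{ 0, 0 },	/* sentinel */
};

static unsigned int chip_id_for(const char *compat)
{
	const struct toy_of_id *id;

	for (id = toy_ids; id->compatible; id++)
		if (!strcmp(id->compatible, compat))
			return (unsigned int)(unsigned long)id->data;
	return 0;	/* no match / no variant data */
}
```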
@@ -390,16 +427,6 @@ static int b53_srab_remove(struct platform_device *pdev)
        return 0;
 }
 
-static const struct of_device_id b53_srab_of_match[] = {
-       { .compatible = "brcm,bcm53010-srab" },
-       { .compatible = "brcm,bcm53011-srab" },
-       { .compatible = "brcm,bcm53012-srab" },
-       { .compatible = "brcm,bcm53018-srab" },
-       { .compatible = "brcm,bcm53019-srab" },
-       { .compatible = "brcm,bcm5301x-srab" },
-       { /* sentinel */ },
-};
-
 static struct platform_driver b53_srab_driver = {
        .probe = b53_srab_probe,
        .remove = b53_srab_remove,
index 5cb06f7..d36aedd 100644 (file)
@@ -216,107 +216,48 @@ static int mv88e6xxx_write(struct mv88e6xxx_chip *chip,
        return 0;
 }
 
-static int _mv88e6xxx_reg_read(struct mv88e6xxx_chip *chip, int addr, int reg)
+/* Indirect write to single pointer-data register with an Update bit */
+static int mv88e6xxx_update(struct mv88e6xxx_chip *chip, int addr, int reg,
+                           u16 update)
 {
        u16 val;
-       int err;
-
-       err = mv88e6xxx_read(chip, addr, reg, &val);
-       if (err)
-               return err;
+       int i, err;
 
-       return val;
-}
+       /* Wait until the previous operation is completed */
+       for (i = 0; i < 16; ++i) {
+               err = mv88e6xxx_read(chip, addr, reg, &val);
+               if (err)
+                       return err;
 
-static int mv88e6xxx_reg_read(struct mv88e6xxx_chip *chip, int addr, int reg)
-{
-       int ret;
+               if (!(val & BIT(15)))
+                       break;
+       }
 
-       mutex_lock(&chip->reg_lock);
-       ret = _mv88e6xxx_reg_read(chip, addr, reg);
-       mutex_unlock(&chip->reg_lock);
+       if (i == 16)
+               return -ETIMEDOUT;
 
-       return ret;
-}
+       /* Set the Update bit to trigger a write operation */
+       val = BIT(15) | update;
 
-static int _mv88e6xxx_reg_write(struct mv88e6xxx_chip *chip, int addr,
-                               int reg, u16 val)
-{
        return mv88e6xxx_write(chip, addr, reg, val);
 }
 
-static int mv88e6xxx_reg_write(struct mv88e6xxx_chip *chip, int addr,
-                              int reg, u16 val)
-{
-       int ret;
-
-       mutex_lock(&chip->reg_lock);
-       ret = _mv88e6xxx_reg_write(chip, addr, reg, val);
-       mutex_unlock(&chip->reg_lock);
-
-       return ret;
-}
-
-static int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr)
+static int _mv88e6xxx_reg_read(struct mv88e6xxx_chip *chip, int addr, int reg)
 {
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+       u16 val;
        int err;
 
-       err = mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_MAC_01,
-                                 (addr[0] << 8) | addr[1]);
-       if (err)
-               return err;
-
-       err = mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_MAC_23,
-                                 (addr[2] << 8) | addr[3]);
+       err = mv88e6xxx_read(chip, addr, reg, &val);
        if (err)
                return err;
 
-       return mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_MAC_45,
-                                  (addr[4] << 8) | addr[5]);
-}
-
-static int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int ret;
-       int i;
-
-       for (i = 0; i < 6; i++) {
-               int j;
-
-               /* Write the MAC address byte. */
-               ret = mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_SWITCH_MAC,
-                                         GLOBAL2_SWITCH_MAC_BUSY |
-                                         (i << 8) | addr[i]);
-               if (ret)
-                       return ret;
-
-               /* Wait for the write to complete. */
-               for (j = 0; j < 16; j++) {
-                       ret = mv88e6xxx_reg_read(chip, REG_GLOBAL2,
-                                                GLOBAL2_SWITCH_MAC);
-                       if (ret < 0)
-                               return ret;
-
-                       if ((ret & GLOBAL2_SWITCH_MAC_BUSY) == 0)
-                               break;
-               }
-               if (j == 16)
-                       return -ETIMEDOUT;
-       }
-
-       return 0;
+       return val;
 }
 
-static int mv88e6xxx_set_addr(struct dsa_switch *ds, u8 *addr)
+static int _mv88e6xxx_reg_write(struct mv88e6xxx_chip *chip, int addr,
+                               int reg, u16 val)
 {
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-
-       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_SWITCH_MAC))
-               return mv88e6xxx_set_addr_indirect(ds, addr);
-       else
-               return mv88e6xxx_set_addr_direct(ds, addr);
+       return mv88e6xxx_write(chip, addr, reg, val);
 }
 
 static int mv88e6xxx_mdio_read_direct(struct mv88e6xxx_chip *chip,
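The new `mv88e6xxx_update()` above captures the chip's pointer-data register protocol: poll until the Update/Busy bit (bit 15) clears, bail out after a bounded number of reads, then write the value with bit 15 set to trigger the operation. A self-contained sketch against a fake register backend:

```c
#include <assert.h>
#include <stdint.h>

/* Fake register: reports busy for a configurable number of reads. */
static uint16_t fake_reg;
static int busy_reads_left;

static uint16_t reg_read(void)
{
	if (busy_reads_left > 0) {
		busy_reads_left--;
		return fake_reg | (1u << 15);	/* still busy */
	}
	return fake_reg;
}

/* Bounded poll, then trigger the write with the Update bit set,
 * mirroring the 16-iteration loop and -ETIMEDOUT above.
 */
static int update(uint16_t val)
{
	int i;

	for (i = 0; i < 16; i++)
		if (!(reg_read() & (1u << 15)))
			break;

	if (i == 16)
		return -1;			/* -ETIMEDOUT */

	fake_reg = (1u << 15) | val;		/* Update bit triggers it */
	return 0;
}
```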
@@ -897,259 +838,12 @@ static int _mv88e6xxx_wait(struct mv88e6xxx_chip *chip, int reg, int offset,
        return -ETIMEDOUT;
 }
 
-static int mv88e6xxx_wait(struct mv88e6xxx_chip *chip, int reg,
-                         int offset, u16 mask)
-{
-       int ret;
-
-       mutex_lock(&chip->reg_lock);
-       ret = _mv88e6xxx_wait(chip, reg, offset, mask);
-       mutex_unlock(&chip->reg_lock);
-
-       return ret;
-}
-
 static int mv88e6xxx_mdio_wait(struct mv88e6xxx_chip *chip)
 {
        return _mv88e6xxx_wait(chip, REG_GLOBAL2, GLOBAL2_SMI_OP,
                               GLOBAL2_SMI_OP_BUSY);
 }
 
-static int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-
-       return mv88e6xxx_wait(chip, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                             GLOBAL2_EEPROM_OP_LOAD);
-}
-
-static int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-
-       return mv88e6xxx_wait(chip, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                             GLOBAL2_EEPROM_OP_BUSY);
-}
-
-static int mv88e6xxx_read_eeprom_word(struct dsa_switch *ds, int addr)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int ret;
-
-       mutex_lock(&chip->eeprom_mutex);
-
-       ret = mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                                 GLOBAL2_EEPROM_OP_READ |
-                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_eeprom_busy_wait(ds);
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_reg_read(chip, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
-error:
-       mutex_unlock(&chip->eeprom_mutex);
-       return ret;
-}
-
-static int mv88e6xxx_get_eeprom_len(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-
-       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_EEPROM))
-               return chip->eeprom_len;
-
-       return 0;
-}
-
-static int mv88e6xxx_get_eeprom(struct dsa_switch *ds,
-                               struct ethtool_eeprom *eeprom, u8 *data)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int offset;
-       int len;
-       int ret;
-
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_EEPROM))
-               return -EOPNOTSUPP;
-
-       offset = eeprom->offset;
-       len = eeprom->len;
-       eeprom->len = 0;
-
-       eeprom->magic = 0xc3ec4951;
-
-       ret = mv88e6xxx_eeprom_load_wait(ds);
-       if (ret < 0)
-               return ret;
-
-       if (offset & 1) {
-               int word;
-
-               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = (word >> 8) & 0xff;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       while (len >= 2) {
-               int word;
-
-               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = word & 0xff;
-               *data++ = (word >> 8) & 0xff;
-
-               offset += 2;
-               len -= 2;
-               eeprom->len += 2;
-       }
-
-       if (len) {
-               int word;
-
-               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               *data++ = word & 0xff;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       return 0;
-}
-
-static int mv88e6xxx_eeprom_is_readonly(struct dsa_switch *ds)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int ret;
-
-       ret = mv88e6xxx_reg_read(chip, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
-       if (ret < 0)
-               return ret;
-
-       if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
-               return -EROFS;
-
-       return 0;
-}
-
-static int mv88e6xxx_write_eeprom_word(struct dsa_switch *ds, int addr,
-                                      u16 data)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int ret;
-
-       mutex_lock(&chip->eeprom_mutex);
-
-       ret = mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
-                                 GLOBAL2_EEPROM_OP_WRITE |
-                                 (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
-       if (ret < 0)
-               goto error;
-
-       ret = mv88e6xxx_eeprom_busy_wait(ds);
-error:
-       mutex_unlock(&chip->eeprom_mutex);
-       return ret;
-}
-
-static int mv88e6xxx_set_eeprom(struct dsa_switch *ds,
-                               struct ethtool_eeprom *eeprom, u8 *data)
-{
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-       int offset;
-       int ret;
-       int len;
-
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_EEPROM))
-               return -EOPNOTSUPP;
-
-       if (eeprom->magic != 0xc3ec4951)
-               return -EINVAL;
-
-       ret = mv88e6xxx_eeprom_is_readonly(ds);
-       if (ret)
-               return ret;
-
-       offset = eeprom->offset;
-       len = eeprom->len;
-       eeprom->len = 0;
-
-       ret = mv88e6xxx_eeprom_load_wait(ds);
-       if (ret < 0)
-               return ret;
-
-       if (offset & 1) {
-               int word;
-
-               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               word = (*data++ << 8) | (word & 0xff);
-
-               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       while (len >= 2) {
-               int word;
-
-               word = *data++;
-               word |= *data++ << 8;
-
-               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset += 2;
-               len -= 2;
-               eeprom->len += 2;
-       }
-
-       if (len) {
-               int word;
-
-               word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
-               if (word < 0)
-                       return word;
-
-               word = (word & 0xff00) | *data++;
-
-               ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
-               if (ret < 0)
-                       return ret;
-
-               offset++;
-               len--;
-               eeprom->len++;
-       }
-
-       return 0;
-}
-
 static int _mv88e6xxx_atu_wait(struct mv88e6xxx_chip *chip)
 {
        return _mv88e6xxx_wait(chip, REG_GLOBAL, GLOBAL_ATU_OP,
@@ -1460,9 +1154,6 @@ static void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port,
        int stp_state;
        int err;
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_PORTSTATE))
-               return;
-
        switch (state) {
        case BR_STATE_DISABLED:
                stp_state = PORT_CONTROL_STATE_DISABLED;
@@ -2398,11 +2089,6 @@ static int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
                                      const struct switchdev_obj_port_fdb *fdb,
                                      struct switchdev_trans *trans)
 {
-       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
-
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_ATU))
-               return -EOPNOTSUPP;
-
        /* We don't need any dynamic resource from the kernel (yet),
         * so skip the prepare phase.
         */
@@ -2418,9 +2104,6 @@ static void mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
                GLOBAL_ATU_DATA_STATE_UC_STATIC;
        struct mv88e6xxx_chip *chip = ds_to_priv(ds);
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_ATU))
-               return;
-
        mutex_lock(&chip->reg_lock);
        if (_mv88e6xxx_port_fdb_load(chip, port, fdb->addr, fdb->vid, state))
                netdev_err(ds->ports[port].netdev,
@@ -2434,9 +2117,6 @@ static int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
        struct mv88e6xxx_chip *chip = ds_to_priv(ds);
        int ret;
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_ATU))
-               return -EOPNOTSUPP;
-
        mutex_lock(&chip->reg_lock);
        ret = _mv88e6xxx_port_fdb_load(chip, port, fdb->addr, fdb->vid,
                                       GLOBAL_ATU_DATA_STATE_UNUSED);
@@ -2542,9 +2222,6 @@ static int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
        u16 fid;
        int err;
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_ATU))
-               return -EOPNOTSUPP;
-
        mutex_lock(&chip->reg_lock);
 
        /* Dump port's default Filtering Information Database (VLAN ID 0) */
@@ -2587,9 +2264,6 @@ static int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
        struct mv88e6xxx_chip *chip = ds_to_priv(ds);
        int i, err = 0;
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_VLANTABLE))
-               return -EOPNOTSUPP;
-
        mutex_lock(&chip->reg_lock);
 
        /* Assign the bridge and remap each port's VLANTable */
@@ -2614,9 +2288,6 @@ static void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
        struct net_device *bridge = chip->ports[port].bridge_dev;
        int i;
 
-       if (!mv88e6xxx_has(chip, MV88E6XXX_FLAG_VLANTABLE))
-               return;
-
        mutex_lock(&chip->reg_lock);
 
        /* Unassign the bridge and remap each port's VLANTable */
@@ -2744,10 +2415,21 @@ static int mv88e6xxx_power_on_serdes(struct mv88e6xxx_chip *chip)
        return ret;
 }
 
-static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+static int mv88e6xxx_port_read(struct mv88e6xxx_chip *chip, int port,
+                              int reg, u16 *val)
 {
-       struct dsa_switch *ds = chip->ds;
-       int ret;
+       int addr = chip->info->port_base_addr + port;
+
+       if (port >= chip->info->num_ports)
+               return -EINVAL;
+
+       return mv88e6xxx_read(chip, addr, reg, val);
+}
+
+static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
+{
+       struct dsa_switch *ds = chip->ds;
+       int ret;
        u16 reg;
 
        if (mv88e6xxx_6352_family(chip) || mv88e6xxx_6351_family(chip) ||
@@ -3016,13 +2698,70 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
        return 0;
 }
 
-static int mv88e6xxx_setup_global(struct mv88e6xxx_chip *chip)
+static int mv88e6xxx_g1_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr)
+{
+       int err;
+
+       err = mv88e6xxx_write(chip, REG_GLOBAL, GLOBAL_MAC_01,
+                             (addr[0] << 8) | addr[1]);
+       if (err)
+               return err;
+
+       err = mv88e6xxx_write(chip, REG_GLOBAL, GLOBAL_MAC_23,
+                             (addr[2] << 8) | addr[3]);
+       if (err)
+               return err;
+
+       return mv88e6xxx_write(chip, REG_GLOBAL, GLOBAL_MAC_45,
+                              (addr[4] << 8) | addr[5]);
+}
+
+static int mv88e6xxx_g1_set_age_time(struct mv88e6xxx_chip *chip,
+                                    unsigned int msecs)
+{
+       const unsigned int coeff = chip->info->age_time_coeff;
+       const unsigned int min = 0x01 * coeff;
+       const unsigned int max = 0xff * coeff;
+       u8 age_time;
+       u16 val;
+       int err;
+
+       if (msecs < min || msecs > max)
+               return -ERANGE;
+
+       /* Round to nearest multiple of coeff */
+       age_time = (msecs + coeff / 2) / coeff;
+
+       err = mv88e6xxx_read(chip, REG_GLOBAL, GLOBAL_ATU_CONTROL, &val);
+       if (err)
+               return err;
+
+       /* AgeTime occupies bits 11:4 */
+       val &= ~0xff0;
+       val |= age_time << 4;
+
+       return mv88e6xxx_write(chip, REG_GLOBAL, GLOBAL_ATU_CONTROL, val);
+}
+
+static int mv88e6xxx_set_ageing_time(struct dsa_switch *ds,
+                                    unsigned int ageing_time)
+{
+       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+       int err;
+
+       mutex_lock(&chip->reg_lock);
+       err = mv88e6xxx_g1_set_age_time(chip, ageing_time);
+       mutex_unlock(&chip->reg_lock);
+
+       return err;
+}
+
+static int mv88e6xxx_g1_setup(struct mv88e6xxx_chip *chip)
 {
        struct dsa_switch *ds = chip->ds;
        u32 upstream_port = dsa_upstream_port(ds);
        u16 reg;
        int err;
-       int i;
 
        /* Enable the PHY Polling Unit if present, don't discard any packets,
         * and mask all interrupt sources.
@@ -3054,12 +2793,26 @@ static int mv88e6xxx_setup_global(struct mv88e6xxx_chip *chip)
        if (err)
                return err;
 
+       /* Clear all the VTU and STU entries */
+       err = _mv88e6xxx_vtu_stu_flush(chip);
+       if (err < 0)
+               return err;
+
        /* Set the default address aging time to 5 minutes, and
         * enable address learn messages to be sent to all message
         * ports.
         */
-       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_ATU_CONTROL,
-                                  0x0140 | GLOBAL_ATU_CONTROL_LEARN2ALL);
+       err = mv88e6xxx_write(chip, REG_GLOBAL, GLOBAL_ATU_CONTROL,
+                             GLOBAL_ATU_CONTROL_LEARN2ALL);
+       if (err)
+               return err;
+
+       err = mv88e6xxx_g1_set_age_time(chip, 300000);
+       if (err)
+               return err;
+
+       /* Clear all ATU entries */
+       err = _mv88e6xxx_atu_flush(chip, 0, true);
        if (err)
                return err;
 
@@ -3094,131 +2847,286 @@ static int mv88e6xxx_setup_global(struct mv88e6xxx_chip *chip)
        if (err)
                return err;
 
-       /* Send all frames with destination addresses matching
-        * 01:80:c2:00:00:0x to the CPU port.
-        */
-       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_MGMT_EN_0X,
-                                  0xffff);
+       /* Clear the statistics counters for all ports */
+       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_STATS_OP,
+                                  GLOBAL_STATS_OP_FLUSH_ALL);
        if (err)
                return err;
 
-       /* Ignore removed tag data on doubly tagged packets, disable
-        * flow control messages, force flow control priority to the
-        * highest, and send all special multicast frames to the CPU
-        * port at the highest priority.
-        */
-       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2, GLOBAL2_SWITCH_MGMT,
-                                  0x7 | GLOBAL2_SWITCH_MGMT_RSVD2CPU | 0x70 |
-                                  GLOBAL2_SWITCH_MGMT_FORCE_FLOW_CTRL_PRI);
+       /* Wait for the flush to complete. */
+       err = _mv88e6xxx_stats_wait(chip);
        if (err)
                return err;
 
-       /* Program the DSA routing table. */
-       for (i = 0; i < 32; i++) {
-               int nexthop = 0x1f;
+       return 0;
+}
 
-               if (i != ds->index && i < DSA_MAX_SWITCHES)
-                       nexthop = ds->rtable[i] & 0x1f;
+static int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip,
+                                            int target, int port)
+{
+       u16 val = (target << 8) | (port & 0xf);
+
+       return mv88e6xxx_update(chip, REG_GLOBAL2, GLOBAL2_DEVICE_MAPPING, val);
+}
 
-               err = _mv88e6xxx_reg_write(
-                       chip, REG_GLOBAL2,
-                       GLOBAL2_DEVICE_MAPPING,
-                       GLOBAL2_DEVICE_MAPPING_UPDATE |
-                       (i << GLOBAL2_DEVICE_MAPPING_TARGET_SHIFT) | nexthop);
+static int mv88e6xxx_g2_set_device_mapping(struct mv88e6xxx_chip *chip)
+{
+       int target, port;
+       int err;
+
+       /* Initialize the routing port for each of the 32 target devices */
+       for (target = 0; target < 32; ++target) {
+               port = 0xf;
+
+               if (target < DSA_MAX_SWITCHES) {
+                       port = chip->ds->rtable[target];
+                       if (port == DSA_RTABLE_NONE)
+                               port = 0xf;
+               }
+
+               err = mv88e6xxx_g2_device_mapping_write(chip, target, port);
                if (err)
-                       return err;
+                       break;
        }
 
-       /* Clear all trunk masks. */
-       for (i = 0; i < 8; i++) {
-               err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2,
-                                          GLOBAL2_TRUNK_MASK,
-                                          0x8000 |
-                                          (i << GLOBAL2_TRUNK_MASK_NUM_SHIFT) |
-                                          ((1 << chip->info->num_ports) - 1));
+       return err;
+}
+
+static int mv88e6xxx_g2_trunk_mask_write(struct mv88e6xxx_chip *chip, int num,
+                                        bool hash, u16 mask)
+{
+       const u16 port_mask = BIT(chip->info->num_ports) - 1;
+       u16 val = (num << 12) | (mask & port_mask);
+
+       if (hash)
+               val |= GLOBAL2_TRUNK_MASK_HASK;
+
+       return mv88e6xxx_update(chip, REG_GLOBAL2, GLOBAL2_TRUNK_MASK, val);
+}
+
+static int mv88e6xxx_g2_trunk_mapping_write(struct mv88e6xxx_chip *chip, int id,
+                                           u16 map)
+{
+       const u16 port_mask = BIT(chip->info->num_ports) - 1;
+       u16 val = (id << 11) | (map & port_mask);
+
+       return mv88e6xxx_update(chip, REG_GLOBAL2, GLOBAL2_TRUNK_MAPPING, val);
+}
+
+static int mv88e6xxx_g2_clear_trunk(struct mv88e6xxx_chip *chip)
+{
+       const u16 port_mask = BIT(chip->info->num_ports) - 1;
+       int i, err;
+
+       /* Clear all eight possible Trunk Mask vectors */
+       for (i = 0; i < 8; ++i) {
+               err = mv88e6xxx_g2_trunk_mask_write(chip, i, false, port_mask);
                if (err)
                        return err;
        }
 
-       /* Clear all trunk mappings. */
-       for (i = 0; i < 16; i++) {
-               err = _mv88e6xxx_reg_write(
-                       chip, REG_GLOBAL2,
-                       GLOBAL2_TRUNK_MAPPING,
-                       GLOBAL2_TRUNK_MAPPING_UPDATE |
-                       (i << GLOBAL2_TRUNK_MAPPING_ID_SHIFT));
+       /* Clear all sixteen possible Trunk ID routing vectors */
+       for (i = 0; i < 16; ++i) {
+               err = mv88e6xxx_g2_trunk_mapping_write(chip, i, 0);
                if (err)
                        return err;
        }
 
-       if (mv88e6xxx_6352_family(chip) || mv88e6xxx_6351_family(chip) ||
-           mv88e6xxx_6165_family(chip) || mv88e6xxx_6097_family(chip) ||
-           mv88e6xxx_6320_family(chip)) {
-               /* Send all frames with destination addresses matching
-                * 01:80:c2:00:00:2x to the CPU port.
-                */
-               err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2,
-                                          GLOBAL2_MGMT_EN_2X, 0xffff);
+       return 0;
+}
+
+static int mv88e6xxx_g2_clear_irl(struct mv88e6xxx_chip *chip)
+{
+       int port, err;
+
+       /* Init all Ingress Rate Limit resources of all ports */
+       for (port = 0; port < chip->info->num_ports; ++port) {
+               /* XXX newer chips (like 88E6390) have different 2-bit ops */
+               err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_IRL_CMD,
+                                     GLOBAL2_IRL_CMD_OP_INIT_ALL |
+                                     (port << 8));
                if (err)
-                       return err;
+                       break;
+
+               /* Wait for the operation to complete */
+               err = _mv88e6xxx_wait(chip, REG_GLOBAL2, GLOBAL2_IRL_CMD,
+                                     GLOBAL2_IRL_CMD_BUSY);
+               if (err)
+                       break;
+       }
+
+       return err;
+}
+
+/* Indirect write to the Switch MAC/WoL/WoF register */
+static int mv88e6xxx_g2_switch_mac_write(struct mv88e6xxx_chip *chip,
+                                        unsigned int pointer, u8 data)
+{
+       u16 val = (pointer << 8) | data;
+
+       return mv88e6xxx_update(chip, REG_GLOBAL2, GLOBAL2_SWITCH_MAC, val);
+}
+
+static int mv88e6xxx_g2_set_switch_mac(struct mv88e6xxx_chip *chip, u8 *addr)
+{
+       int i, err;
 
-               /* Initialise cross-chip port VLAN table to reset
-                * defaults.
+       for (i = 0; i < 6; i++) {
+               err = mv88e6xxx_g2_switch_mac_write(chip, i, addr[i]);
+               if (err)
+                       break;
+       }
+
+       return err;
+}
+
+static int mv88e6xxx_g2_pot_write(struct mv88e6xxx_chip *chip, int pointer,
+                                 u8 data)
+{
+       u16 val = (pointer << 8) | (data & 0x7);
+
+       return mv88e6xxx_update(chip, REG_GLOBAL2, GLOBAL2_PRIO_OVERRIDE, val);
+}
+
+static int mv88e6xxx_g2_clear_pot(struct mv88e6xxx_chip *chip)
+{
+       int i, err;
+
+       /* Clear all sixteen possible Priority Override entries */
+       for (i = 0; i < 16; i++) {
+               err = mv88e6xxx_g2_pot_write(chip, i, 0);
+               if (err)
+                       break;
+       }
+
+       return err;
+}
+
+static int mv88e6xxx_g2_eeprom_wait(struct mv88e6xxx_chip *chip)
+{
+       return _mv88e6xxx_wait(chip, REG_GLOBAL2, GLOBAL2_EEPROM_CMD,
+                              GLOBAL2_EEPROM_CMD_BUSY |
+                              GLOBAL2_EEPROM_CMD_RUNNING);
+}
+
+static int mv88e6xxx_g2_eeprom_cmd(struct mv88e6xxx_chip *chip, u16 cmd)
+{
+       int err;
+
+       err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_EEPROM_CMD, cmd);
+       if (err)
+               return err;
+
+       return mv88e6xxx_g2_eeprom_wait(chip);
+}
+
+static int mv88e6xxx_g2_eeprom_read16(struct mv88e6xxx_chip *chip,
+                                     u8 addr, u16 *data)
+{
+       u16 cmd = GLOBAL2_EEPROM_CMD_OP_READ | addr;
+       int err;
+
+       err = mv88e6xxx_g2_eeprom_wait(chip);
+       if (err)
+               return err;
+
+       err = mv88e6xxx_g2_eeprom_cmd(chip, cmd);
+       if (err)
+               return err;
+
+       return mv88e6xxx_read(chip, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
+}
+
+static int mv88e6xxx_g2_eeprom_write16(struct mv88e6xxx_chip *chip,
+                                      u8 addr, u16 data)
+{
+       u16 cmd = GLOBAL2_EEPROM_CMD_OP_WRITE | addr;
+       int err;
+
+       err = mv88e6xxx_g2_eeprom_wait(chip);
+       if (err)
+               return err;
+
+       err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
+       if (err)
+               return err;
+
+       return mv88e6xxx_g2_eeprom_cmd(chip, cmd);
+}
+
+static int mv88e6xxx_g2_setup(struct mv88e6xxx_chip *chip)
+{
+       u16 reg;
+       int err;
+
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_MGMT_EN_2X)) {
+               /* Consider the frames with reserved multicast destination
+                * addresses matching 01:80:c2:00:00:2x as MGMT.
                 */
-               err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2,
-                                          GLOBAL2_PVT_ADDR, 0x9000);
+               err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_MGMT_EN_2X,
+                                     0xffff);
                if (err)
                        return err;
-
-               /* Clear the priority override table. */
-               for (i = 0; i < 16; i++) {
-                       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2,
-                                                  GLOBAL2_PRIO_OVERRIDE,
-                                                  0x8000 | (i << 8));
-                       if (err)
-                               return err;
-               }
        }
 
-       if (mv88e6xxx_6352_family(chip) || mv88e6xxx_6351_family(chip) ||
-           mv88e6xxx_6165_family(chip) || mv88e6xxx_6097_family(chip) ||
-           mv88e6xxx_6185_family(chip) || mv88e6xxx_6095_family(chip) ||
-           mv88e6xxx_6320_family(chip)) {
-               /* Disable ingress rate limiting by resetting all
-                * ingress rate limit registers to their initial
-                * state.
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_MGMT_EN_0X)) {
+               /* Consider the frames with reserved multicast destination
+                * addresses matching 01:80:c2:00:00:0x as MGMT.
                 */
-               for (i = 0; i < chip->info->num_ports; i++) {
-                       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL2,
-                                                  GLOBAL2_INGRESS_OP,
-                                                  0x9000 | (i << 8));
-                       if (err)
-                               return err;
-               }
+               err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_MGMT_EN_0X,
+                                     0xffff);
+               if (err)
+                       return err;
        }
 
-       /* Clear the statistics counters for all ports */
-       err = _mv88e6xxx_reg_write(chip, REG_GLOBAL, GLOBAL_STATS_OP,
-                                  GLOBAL_STATS_OP_FLUSH_ALL);
+       /* Ignore removed tag data on doubly tagged packets, disable
+        * flow control messages, force flow control priority to the
+        * highest, and send all special multicast frames to the CPU
+        * port at the highest priority.
+        */
+       reg = GLOBAL2_SWITCH_MGMT_FORCE_FLOW_CTRL_PRI | (0x7 << 4);
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_MGMT_EN_0X) ||
+           mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_MGMT_EN_2X))
+               reg |= GLOBAL2_SWITCH_MGMT_RSVD2CPU | 0x7;
+       err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_SWITCH_MGMT, reg);
        if (err)
                return err;
 
-       /* Wait for the flush to complete. */
-       err = _mv88e6xxx_stats_wait(chip);
+       /* Program the DSA routing table. */
+       err = mv88e6xxx_g2_set_device_mapping(chip);
        if (err)
                return err;
 
-       /* Clear all ATU entries */
-       err = _mv88e6xxx_atu_flush(chip, 0, true);
+       /* Clear all trunk masks and mapping. */
+       err = mv88e6xxx_g2_clear_trunk(chip);
        if (err)
                return err;
 
-       /* Clear all the VTU and STU entries */
-       err = _mv88e6xxx_vtu_stu_flush(chip);
-       if (err < 0)
-               return err;
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAGS_IRL)) {
+               /* Disable ingress rate limiting by resetting all per port
+                * ingress rate limit resources to their initial state.
+                */
+               err = mv88e6xxx_g2_clear_irl(chip);
+               if (err)
+                       return err;
+       }
 
-       return err;
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAGS_PVT)) {
+               /* Initialize Cross-chip Port VLAN Table to reset defaults */
+               err = mv88e6xxx_write(chip, REG_GLOBAL2, GLOBAL2_PVT_ADDR,
+                                     GLOBAL2_PVT_ADDR_OP_INIT_ONES);
+               if (err)
+                       return err;
+       }
+
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_POT)) {
+               /* Clear the priority override table. */
+               err = mv88e6xxx_g2_clear_pot(chip);
+               if (err)
+                       return err;
+       }
+
+       return 0;
 }
 
 static int mv88e6xxx_setup(struct dsa_switch *ds)
@@ -3230,21 +3138,27 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
        chip->ds = ds;
        ds->slave_mii_bus = chip->mdio_bus;
 
-       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_EEPROM))
-               mutex_init(&chip->eeprom_mutex);
-
        mutex_lock(&chip->reg_lock);
 
        err = mv88e6xxx_switch_reset(chip);
        if (err)
                goto unlock;
 
-       err = mv88e6xxx_setup_global(chip);
+       /* Setup Switch Port Registers */
+       for (i = 0; i < chip->info->num_ports; i++) {
+               err = mv88e6xxx_setup_port(chip, i);
+               if (err)
+                       goto unlock;
+       }
+
+       /* Setup Switch Global 1 Registers */
+       err = mv88e6xxx_g1_setup(chip);
        if (err)
                goto unlock;
 
-       for (i = 0; i < chip->info->num_ports; i++) {
-               err = mv88e6xxx_setup_port(chip, i);
+       /* Setup Switch Global 2 Registers */
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_GLOBAL2)) {
+               err = mv88e6xxx_g2_setup(chip);
                if (err)
                        goto unlock;
        }
@@ -3255,6 +3169,24 @@ unlock:
        return err;
 }
 
+static int mv88e6xxx_set_addr(struct dsa_switch *ds, u8 *addr)
+{
+       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+       int err;
+
+       mutex_lock(&chip->reg_lock);
+
+       /* Has an indirect Switch MAC/WoL/WoF register in Global 2? */
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_G2_SWITCH_MAC))
+               err = mv88e6xxx_g2_set_switch_mac(chip, addr);
+       else
+               err = mv88e6xxx_g1_set_switch_mac(chip, addr);
+
+       mutex_unlock(&chip->reg_lock);
+
+       return err;
+}
+
 static int mv88e6xxx_mdio_page_read(struct dsa_switch *ds, int port, int page,
                                    int reg)
 {
@@ -3528,6 +3460,173 @@ static int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
 }
 #endif /* CONFIG_NET_DSA_HWMON */
 
+static int mv88e6xxx_get_eeprom_len(struct dsa_switch *ds)
+{
+       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+
+       return chip->eeprom_len;
+}
+
+static int mv88e6xxx_get_eeprom16(struct mv88e6xxx_chip *chip,
+                                 struct ethtool_eeprom *eeprom, u8 *data)
+{
+       unsigned int offset = eeprom->offset;
+       unsigned int len = eeprom->len;
+       u16 val;
+       int err;
+
+       eeprom->len = 0;
+
+       if (offset & 1) {
+               err = mv88e6xxx_g2_eeprom_read16(chip, offset >> 1, &val);
+               if (err)
+                       return err;
+
+               *data++ = (val >> 8) & 0xff;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       while (len >= 2) {
+               err = mv88e6xxx_g2_eeprom_read16(chip, offset >> 1, &val);
+               if (err)
+                       return err;
+
+               *data++ = val & 0xff;
+               *data++ = (val >> 8) & 0xff;
+
+               offset += 2;
+               len -= 2;
+               eeprom->len += 2;
+       }
+
+       if (len) {
+               err = mv88e6xxx_g2_eeprom_read16(chip, offset >> 1, &val);
+               if (err)
+                       return err;
+
+               *data++ = val & 0xff;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       return 0;
+}
+
+static int mv88e6xxx_get_eeprom(struct dsa_switch *ds,
+                               struct ethtool_eeprom *eeprom, u8 *data)
+{
+       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+       int err;
+
+       mutex_lock(&chip->reg_lock);
+
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAGS_EEPROM16))
+               err = mv88e6xxx_get_eeprom16(chip, eeprom, data);
+       else
+               err = -EOPNOTSUPP;
+
+       mutex_unlock(&chip->reg_lock);
+
+       if (err)
+               return err;
+
+       eeprom->magic = 0xc3ec4951;
+
+       return 0;
+}
+
+static int mv88e6xxx_set_eeprom16(struct mv88e6xxx_chip *chip,
+                                 struct ethtool_eeprom *eeprom, u8 *data)
+{
+       unsigned int offset = eeprom->offset;
+       unsigned int len = eeprom->len;
+       u16 val;
+       int err;
+
+       /* Ensure the RO WriteEn bit is set */
+       err = mv88e6xxx_read(chip, REG_GLOBAL2, GLOBAL2_EEPROM_CMD, &val);
+       if (err)
+               return err;
+
+       if (!(val & GLOBAL2_EEPROM_CMD_WRITE_EN))
+               return -EROFS;
+
+       eeprom->len = 0;
+
+       if (offset & 1) {
+               err = mv88e6xxx_g2_eeprom_read16(chip, offset >> 1, &val);
+               if (err)
+                       return err;
+
+               val = (*data++ << 8) | (val & 0xff);
+
+               err = mv88e6xxx_g2_eeprom_write16(chip, offset >> 1, val);
+               if (err)
+                       return err;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       while (len >= 2) {
+               val = *data++;
+               val |= *data++ << 8;
+
+               err = mv88e6xxx_g2_eeprom_write16(chip, offset >> 1, val);
+               if (err)
+                       return err;
+
+               offset += 2;
+               len -= 2;
+               eeprom->len += 2;
+       }
+
+       if (len) {
+               err = mv88e6xxx_g2_eeprom_read16(chip, offset >> 1, &val);
+               if (err)
+                       return err;
+
+               val = (val & 0xff00) | *data++;
+
+               err = mv88e6xxx_g2_eeprom_write16(chip, offset >> 1, val);
+               if (err)
+                       return err;
+
+               offset++;
+               len--;
+               eeprom->len++;
+       }
+
+       return 0;
+}
+
+static int mv88e6xxx_set_eeprom(struct dsa_switch *ds,
+                               struct ethtool_eeprom *eeprom, u8 *data)
+{
+       struct mv88e6xxx_chip *chip = ds_to_priv(ds);
+       int err;
+
+       if (eeprom->magic != 0xc3ec4951)
+               return -EINVAL;
+
+       mutex_lock(&chip->reg_lock);
+
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAGS_EEPROM16))
+               err = mv88e6xxx_set_eeprom16(chip, eeprom, data);
+       else
+               err = -EOPNOTSUPP;
+
+       mutex_unlock(&chip->reg_lock);
+
+       return err;
+}
+
 static const struct mv88e6xxx_info mv88e6xxx_table[] = {
        [MV88E6085] = {
                .prod_num = PORT_SWITCH_ID_PROD_NUM_6085,
@@ -3536,6 +3635,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 10,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6097,
        },
 
@@ -3546,6 +3646,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 256,
                .num_ports = 11,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6095,
        },
 
@@ -3556,6 +3657,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 3,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6165,
        },
 
@@ -3566,6 +3668,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 256,
                .num_ports = 8,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6185,
        },
 
@@ -3576,6 +3679,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 6,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6165,
        },
 
@@ -3586,6 +3690,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 6,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6165,
        },
 
@@ -3596,6 +3701,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6351,
        },
 
@@ -3606,6 +3712,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6352,
        },
 
@@ -3616,6 +3723,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6351,
        },
 
@@ -3626,6 +3734,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6352,
        },
 
@@ -3636,6 +3745,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 256,
                .num_ports = 10,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6185,
        },
 
@@ -3646,6 +3756,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6352,
        },
 
@@ -3656,6 +3767,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6320,
        },
 
@@ -3666,6 +3778,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6320,
        },
 
@@ -3676,6 +3789,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6351,
        },
 
@@ -3686,6 +3800,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6351,
        },
 
@@ -3696,6 +3811,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
                .num_databases = 4096,
                .num_ports = 7,
                .port_base_addr = 0x10,
+               .age_time_coeff = 15000,
                .flags = MV88E6XXX_FLAGS_FAMILY_6352,
        },
 };
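Every table entry above gains `age_time_coeff = 15000`, i.e. one step of the hardware age-time field corresponds to 15000 ms on these families. A rough standalone sketch of how a requested ageing time could map onto an 8-bit register field; the rounding mode and range limits here are assumptions for illustration, not copied from the driver:

```c
/* Illustrative mapping from a requested ageing time in milliseconds to an
 * 8-bit age-time register value using the new per-chip age_time_coeff
 * (15000 ms per hardware step on the families above). Rounding and range
 * limits are assumptions, not copied from the driver. */
static int msecs_to_age_time(unsigned int msecs, unsigned int coeff)
{
	unsigned int val = (msecs + coeff / 2) / coeff;	/* round to nearest */

	if (val < 1 || val > 0xff)
		return -1;	/* outside the register field's range */
	return (int)val;
}
```

Under these assumptions, the default bridge ageing time of 300 seconds maps to a register value of 20.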
@@ -3714,12 +3830,15 @@ static const struct mv88e6xxx_info *mv88e6xxx_lookup_info(unsigned int prod_num)
 static int mv88e6xxx_detect(struct mv88e6xxx_chip *chip)
 {
        const struct mv88e6xxx_info *info;
-       int id, prod_num, rev;
+       unsigned int prod_num, rev;
+       u16 id;
+       int err;
 
-       id = mv88e6xxx_reg_read(chip, chip->info->port_base_addr,
-                               PORT_SWITCH_ID);
-       if (id < 0)
-               return id;
+       mutex_lock(&chip->reg_lock);
+       err = mv88e6xxx_port_read(chip, 0, PORT_SWITCH_ID, &id);
+       mutex_unlock(&chip->reg_lock);
+       if (err)
+               return err;
 
        prod_num = (id & 0xfff0) >> 4;
        rev = id & 0x000f;
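The detect path above splits the 16-bit PORT_SWITCH_ID value into a 12-bit product number (bits 15:4) and a 4-bit revision (bits 3:0). The same bit manipulation in standalone form:

```c
#include <stdint.h>

/* Same decoding as mv88e6xxx_detect() above: PORT_SWITCH_ID packs the
 * product number in bits 15:4 and the revision in bits 3:0. */
static unsigned int id_to_prod_num(uint16_t id)
{
	return (id & 0xfff0) >> 4;
}

static unsigned int id_to_rev(uint16_t id)
{
	return id & 0x000f;
}
```

For instance, an ID of 0x3521 decodes as product number 0x352, revision 1.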
@@ -3834,6 +3953,7 @@ static struct dsa_switch_driver mv88e6xxx_switch_driver = {
        .set_eeprom             = mv88e6xxx_set_eeprom,
        .get_regs_len           = mv88e6xxx_get_regs_len,
        .get_regs               = mv88e6xxx_get_regs,
+       .set_ageing_time        = mv88e6xxx_set_ageing_time,
        .port_bridge_join       = mv88e6xxx_port_bridge_join,
        .port_bridge_leave      = mv88e6xxx_port_bridge_leave,
        .port_stp_state_set     = mv88e6xxx_port_stp_state_set,
@@ -3903,7 +4023,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
        if (IS_ERR(chip->reset))
                return PTR_ERR(chip->reset);
 
-       if (mv88e6xxx_has(chip, MV88E6XXX_FLAG_EEPROM) &&
+       if (mv88e6xxx_has(chip, MV88E6XXX_FLAGS_EEPROM16) &&
            !of_property_read_u32(np, "eeprom-length", &eeprom_len))
                chip->eeprom_len = eeprom_len;
 
index 83f0662..48d6ea7 100644 (file)
 #define GLOBAL2_TRUNK_MASK     0x07
 #define GLOBAL2_TRUNK_MASK_UPDATE              BIT(15)
 #define GLOBAL2_TRUNK_MASK_NUM_SHIFT           12
+#define GLOBAL2_TRUNK_MASK_HASH                        BIT(11)
 #define GLOBAL2_TRUNK_MAPPING  0x08
 #define GLOBAL2_TRUNK_MAPPING_UPDATE           BIT(15)
 #define GLOBAL2_TRUNK_MAPPING_ID_SHIFT         11
-#define GLOBAL2_INGRESS_OP     0x09
-#define GLOBAL2_INGRESS_DATA   0x0a
+#define GLOBAL2_IRL_CMD                0x09
+#define GLOBAL2_IRL_CMD_BUSY   BIT(15)
+#define GLOBAL2_IRL_CMD_OP_INIT_ALL    ((0x1 << 12) | GLOBAL2_IRL_CMD_BUSY)
+#define GLOBAL2_IRL_CMD_OP_INIT_SEL    ((0x2 << 12) | GLOBAL2_IRL_CMD_BUSY)
+#define GLOBAL2_IRL_CMD_OP_WRITE_SEL   ((0x3 << 12) | GLOBAL2_IRL_CMD_BUSY)
+#define GLOBAL2_IRL_CMD_OP_READ_SEL    ((0x4 << 12) | GLOBAL2_IRL_CMD_BUSY)
+#define GLOBAL2_IRL_DATA       0x0a
 #define GLOBAL2_PVT_ADDR       0x0b
+#define GLOBAL2_PVT_ADDR_BUSY  BIT(15)
+#define GLOBAL2_PVT_ADDR_OP_INIT_ONES  ((0x01 << 12) | GLOBAL2_PVT_ADDR_BUSY)
+#define GLOBAL2_PVT_ADDR_OP_WRITE_PVLAN        ((0x03 << 12) | GLOBAL2_PVT_ADDR_BUSY)
+#define GLOBAL2_PVT_ADDR_OP_READ       ((0x04 << 12) | GLOBAL2_PVT_ADDR_BUSY)
 #define GLOBAL2_PVT_DATA       0x0c
 #define GLOBAL2_SWITCH_MAC     0x0d
-#define GLOBAL2_SWITCH_MAC_BUSY BIT(15)
 #define GLOBAL2_ATU_STATS      0x0e
 #define GLOBAL2_PRIO_OVERRIDE  0x0f
 #define GLOBAL2_PRIO_OVERRIDE_FORCE_SNOOP      BIT(7)
 #define GLOBAL2_PRIO_OVERRIDE_SNOOP_SHIFT      4
 #define GLOBAL2_PRIO_OVERRIDE_FORCE_ARP                BIT(3)
 #define GLOBAL2_PRIO_OVERRIDE_ARP_SHIFT                0
-#define GLOBAL2_EEPROM_OP      0x14
-#define GLOBAL2_EEPROM_OP_BUSY         BIT(15)
-#define GLOBAL2_EEPROM_OP_WRITE                ((3 << 12) | GLOBAL2_EEPROM_OP_BUSY)
-#define GLOBAL2_EEPROM_OP_READ         ((4 << 12) | GLOBAL2_EEPROM_OP_BUSY)
-#define GLOBAL2_EEPROM_OP_LOAD         BIT(11)
-#define GLOBAL2_EEPROM_OP_WRITE_EN     BIT(10)
-#define GLOBAL2_EEPROM_OP_ADDR_MASK    0xff
+#define GLOBAL2_EEPROM_CMD             0x14
+#define GLOBAL2_EEPROM_CMD_BUSY                BIT(15)
+#define GLOBAL2_EEPROM_CMD_OP_WRITE    ((0x3 << 12) | GLOBAL2_EEPROM_CMD_BUSY)
+#define GLOBAL2_EEPROM_CMD_OP_READ     ((0x4 << 12) | GLOBAL2_EEPROM_CMD_BUSY)
+#define GLOBAL2_EEPROM_CMD_OP_LOAD     ((0x6 << 12) | GLOBAL2_EEPROM_CMD_BUSY)
+#define GLOBAL2_EEPROM_CMD_RUNNING     BIT(11)
+#define GLOBAL2_EEPROM_CMD_WRITE_EN    BIT(10)
+#define GLOBAL2_EEPROM_CMD_ADDR_MASK   0xff
 #define GLOBAL2_EEPROM_DATA    0x15
 #define GLOBAL2_PTP_AVB_OP     0x16
 #define GLOBAL2_PTP_AVB_DATA   0x17
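The renamed `*_CMD` registers above all follow one convention: an opcode sits in the bits just below a self-clearing BUSY flag in bit 15, which the driver sets to start an operation and then polls until the hardware clears it, with the address or selector riding in the low bits. A standalone model of that encoding; the macro and helper names here are invented:

```c
#include <stdint.h>

/* Illustrative model of the Global2 *_CMD register convention above: an
 * opcode ORed with BUSY (bit 15), which the driver sets to kick off the
 * operation and polls until hardware clears it; low bits carry the
 * address or selector. Names are invented for illustration. */
#define CMD_BUSY	0x8000u
#define CMD_OP(op)	((uint16_t)(((op) << 12) | CMD_BUSY))

static uint16_t eeprom_read_cmd(uint8_t addr)
{
	/* same shape as GLOBAL2_EEPROM_CMD_OP_READ | addr in the driver */
	return CMD_OP(0x4) | addr;
}
```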
@@ -374,19 +384,24 @@ enum mv88e6xxx_family {
 };
 
 enum mv88e6xxx_cap {
-       /* Address Translation Unit.
-        * The ATU is used to lookup and learn MAC addresses. See GLOBAL_ATU_OP.
-        */
-       MV88E6XXX_CAP_ATU,
-
        /* Energy Efficient Ethernet.
         */
        MV88E6XXX_CAP_EEE,
 
-       /* EEPROM Command and Data registers.
-        * See GLOBAL2_EEPROM_OP and GLOBAL2_EEPROM_DATA.
+       /* Switch Global 2 Registers.
+        * The device contains a second set of global 16-bit registers.
         */
-       MV88E6XXX_CAP_EEPROM,
+       MV88E6XXX_CAP_GLOBAL2,
+       MV88E6XXX_CAP_G2_MGMT_EN_2X,    /* (0x02) MGMT Enable Register 2x */
+       MV88E6XXX_CAP_G2_MGMT_EN_0X,    /* (0x03) MGMT Enable Register 0x */
+       MV88E6XXX_CAP_G2_IRL_CMD,       /* (0x09) Ingress Rate Command */
+       MV88E6XXX_CAP_G2_IRL_DATA,      /* (0x0a) Ingress Rate Data */
+       MV88E6XXX_CAP_G2_PVT_ADDR,      /* (0x0b) Cross Chip Port VLAN Addr */
+       MV88E6XXX_CAP_G2_PVT_DATA,      /* (0x0c) Cross Chip Port VLAN Data */
+       MV88E6XXX_CAP_G2_SWITCH_MAC,    /* (0x0d) Switch MAC/WoL/WoF */
+       MV88E6XXX_CAP_G2_POT,           /* (0x0f) Priority Override Table */
+       MV88E6XXX_CAP_G2_EEPROM_CMD,    /* (0x14) EEPROM Command */
+       MV88E6XXX_CAP_G2_EEPROM_DATA,   /* (0x15) EEPROM Data */
 
        /* Multi-chip Addressing Mode.
         * Some chips require an indirect SMI access when their SMI device
@@ -394,11 +409,6 @@ enum mv88e6xxx_cap {
         */
        MV88E6XXX_CAP_MULTI_CHIP,
 
-       /* Port State Filtering for 802.1D Spanning Tree.
-        * See PORT_CONTROL_STATE_* values in the PORT_CONTROL register.
-        */
-       MV88E6XXX_CAP_PORTSTATE,
-
        /* PHY Polling Unit.
         * See GLOBAL_CONTROL_PPU_ENABLE and GLOBAL_STATUS_PPU_POLLING.
         */
@@ -417,25 +427,12 @@ enum mv88e6xxx_cap {
         */
        MV88E6XXX_CAP_STU,
 
-       /* Switch MAC/WoL/WoF register.
-        * This requires an indirect access to set the switch MAC address
-        * through GLOBAL2_SWITCH_MAC, otherwise GLOBAL_MAC_01, GLOBAL_MAC_23,
-        * and GLOBAL_MAC_45 are used with a direct access.
-        */
-       MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF,
-
        /* Internal temperature sensor.
         * Available from any enabled port's PHY register 26, page 6.
         */
        MV88E6XXX_CAP_TEMP,
        MV88E6XXX_CAP_TEMP_LIMIT,
 
-       /* In-chip Port Based VLANs.
-        * Each port VLANTable register (see PORT_BASE_VLAN) is used to restrict
-        * the output (or egress) ports to which it is allowed to send frames.
-        */
-       MV88E6XXX_CAP_VLANTABLE,
-
        /* VLAN Table Unit.
         * The VTU is used to program 802.1Q VLANs. See GLOBAL_VTU_OP.
         */
@@ -443,90 +440,130 @@ enum mv88e6xxx_cap {
 };
 
 /* Bitmask of capabilities */
-#define MV88E6XXX_FLAG_ATU             BIT(MV88E6XXX_CAP_ATU)
 #define MV88E6XXX_FLAG_EEE             BIT(MV88E6XXX_CAP_EEE)
-#define MV88E6XXX_FLAG_EEPROM          BIT(MV88E6XXX_CAP_EEPROM)
+#define MV88E6XXX_FLAG_GLOBAL2         BIT(MV88E6XXX_CAP_GLOBAL2)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_2X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_2X)
+#define MV88E6XXX_FLAG_G2_MGMT_EN_0X   BIT(MV88E6XXX_CAP_G2_MGMT_EN_0X)
+#define MV88E6XXX_FLAG_G2_IRL_CMD      BIT(MV88E6XXX_CAP_G2_IRL_CMD)
+#define MV88E6XXX_FLAG_G2_IRL_DATA     BIT(MV88E6XXX_CAP_G2_IRL_DATA)
+#define MV88E6XXX_FLAG_G2_PVT_ADDR     BIT(MV88E6XXX_CAP_G2_PVT_ADDR)
+#define MV88E6XXX_FLAG_G2_PVT_DATA     BIT(MV88E6XXX_CAP_G2_PVT_DATA)
+#define MV88E6XXX_FLAG_G2_SWITCH_MAC   BIT(MV88E6XXX_CAP_G2_SWITCH_MAC)
+#define MV88E6XXX_FLAG_G2_POT          BIT(MV88E6XXX_CAP_G2_POT)
+#define MV88E6XXX_FLAG_G2_EEPROM_CMD   BIT(MV88E6XXX_CAP_G2_EEPROM_CMD)
+#define MV88E6XXX_FLAG_G2_EEPROM_DATA  BIT(MV88E6XXX_CAP_G2_EEPROM_DATA)
 #define MV88E6XXX_FLAG_MULTI_CHIP      BIT(MV88E6XXX_CAP_MULTI_CHIP)
-#define MV88E6XXX_FLAG_PORTSTATE       BIT(MV88E6XXX_CAP_PORTSTATE)
 #define MV88E6XXX_FLAG_PPU             BIT(MV88E6XXX_CAP_PPU)
 #define MV88E6XXX_FLAG_PPU_ACTIVE      BIT(MV88E6XXX_CAP_PPU_ACTIVE)
 #define MV88E6XXX_FLAG_SMI_PHY         BIT(MV88E6XXX_CAP_SMI_PHY)
 #define MV88E6XXX_FLAG_STU             BIT(MV88E6XXX_CAP_STU)
-#define MV88E6XXX_FLAG_SWITCH_MAC      BIT(MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF)
 #define MV88E6XXX_FLAG_TEMP            BIT(MV88E6XXX_CAP_TEMP)
 #define MV88E6XXX_FLAG_TEMP_LIMIT      BIT(MV88E6XXX_CAP_TEMP_LIMIT)
-#define MV88E6XXX_FLAG_VLANTABLE       BIT(MV88E6XXX_CAP_VLANTABLE)
 #define MV88E6XXX_FLAG_VTU             BIT(MV88E6XXX_CAP_VTU)
 
+/* EEPROM Programming via Global2 with 16-bit data */
+#define MV88E6XXX_FLAGS_EEPROM16       \
+       (MV88E6XXX_FLAG_G2_EEPROM_CMD | \
+        MV88E6XXX_FLAG_G2_EEPROM_DATA)
+
+/* Ingress Rate Limit unit */
+#define MV88E6XXX_FLAGS_IRL            \
+       (MV88E6XXX_FLAG_G2_IRL_CMD |    \
+        MV88E6XXX_FLAG_G2_IRL_DATA)
+
+/* Cross-chip Port VLAN Table */
+#define MV88E6XXX_FLAGS_PVT            \
+       (MV88E6XXX_FLAG_G2_PVT_ADDR |   \
+        MV88E6XXX_FLAG_G2_PVT_DATA)
+
 #define MV88E6XXX_FLAGS_FAMILY_6095    \
-       (MV88E6XXX_FLAG_ATU |           \
+       (MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
         MV88E6XXX_FLAG_PPU |           \
-        MV88E6XXX_FLAG_VLANTABLE |     \
         MV88E6XXX_FLAG_VTU)
 
 #define MV88E6XXX_FLAGS_FAMILY_6097    \
-       (MV88E6XXX_FLAG_ATU |           \
+       (MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_2X | \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
+        MV88E6XXX_FLAG_G2_POT |        \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
         MV88E6XXX_FLAG_PPU |           \
         MV88E6XXX_FLAG_STU |           \
-        MV88E6XXX_FLAG_VLANTABLE |     \
-        MV88E6XXX_FLAG_VTU)
+        MV88E6XXX_FLAG_VTU |           \
+        MV88E6XXX_FLAGS_IRL |          \
+        MV88E6XXX_FLAGS_PVT)
 
 #define MV88E6XXX_FLAGS_FAMILY_6165    \
-       (MV88E6XXX_FLAG_MULTI_CHIP |    \
+       (MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_2X | \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
+        MV88E6XXX_FLAG_G2_SWITCH_MAC | \
+        MV88E6XXX_FLAG_G2_POT |        \
+        MV88E6XXX_FLAG_MULTI_CHIP |    \
         MV88E6XXX_FLAG_STU |           \
-        MV88E6XXX_FLAG_SWITCH_MAC |    \
         MV88E6XXX_FLAG_TEMP |          \
-        MV88E6XXX_FLAG_VTU)
+        MV88E6XXX_FLAG_VTU |           \
+        MV88E6XXX_FLAGS_IRL |          \
+        MV88E6XXX_FLAGS_PVT)
 
 #define MV88E6XXX_FLAGS_FAMILY_6185    \
-       (MV88E6XXX_FLAG_ATU |           \
+       (MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
         MV88E6XXX_FLAG_PPU |           \
-        MV88E6XXX_FLAG_VLANTABLE |     \
         MV88E6XXX_FLAG_VTU)
 
 #define MV88E6XXX_FLAGS_FAMILY_6320    \
-       (MV88E6XXX_FLAG_ATU |           \
-        MV88E6XXX_FLAG_EEE |           \
-        MV88E6XXX_FLAG_EEPROM |        \
+       (MV88E6XXX_FLAG_EEE |           \
+        MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_2X | \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
+        MV88E6XXX_FLAG_G2_SWITCH_MAC | \
+        MV88E6XXX_FLAG_G2_POT |        \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
-        MV88E6XXX_FLAG_PORTSTATE |     \
         MV88E6XXX_FLAG_PPU_ACTIVE |    \
         MV88E6XXX_FLAG_SMI_PHY |       \
-        MV88E6XXX_FLAG_SWITCH_MAC |    \
         MV88E6XXX_FLAG_TEMP |          \
         MV88E6XXX_FLAG_TEMP_LIMIT |    \
-        MV88E6XXX_FLAG_VLANTABLE |     \
-        MV88E6XXX_FLAG_VTU)
+        MV88E6XXX_FLAG_VTU |           \
+        MV88E6XXX_FLAGS_EEPROM16 |     \
+        MV88E6XXX_FLAGS_IRL |          \
+        MV88E6XXX_FLAGS_PVT)
 
 #define MV88E6XXX_FLAGS_FAMILY_6351    \
-       (MV88E6XXX_FLAG_ATU |           \
+       (MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_2X | \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
+        MV88E6XXX_FLAG_G2_SWITCH_MAC | \
+        MV88E6XXX_FLAG_G2_POT |        \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
-        MV88E6XXX_FLAG_PORTSTATE |     \
         MV88E6XXX_FLAG_PPU_ACTIVE |    \
         MV88E6XXX_FLAG_SMI_PHY |       \
         MV88E6XXX_FLAG_STU |           \
-        MV88E6XXX_FLAG_SWITCH_MAC |    \
         MV88E6XXX_FLAG_TEMP |          \
-        MV88E6XXX_FLAG_VLANTABLE |     \
-        MV88E6XXX_FLAG_VTU)
+        MV88E6XXX_FLAG_VTU |           \
+        MV88E6XXX_FLAGS_IRL |          \
+        MV88E6XXX_FLAGS_PVT)
 
 #define MV88E6XXX_FLAGS_FAMILY_6352    \
-       (MV88E6XXX_FLAG_ATU |           \
-        MV88E6XXX_FLAG_EEE |           \
-        MV88E6XXX_FLAG_EEPROM |        \
+       (MV88E6XXX_FLAG_EEE |           \
+        MV88E6XXX_FLAG_GLOBAL2 |       \
+        MV88E6XXX_FLAG_G2_MGMT_EN_2X | \
+        MV88E6XXX_FLAG_G2_MGMT_EN_0X | \
+        MV88E6XXX_FLAG_G2_SWITCH_MAC | \
+        MV88E6XXX_FLAG_G2_POT |        \
         MV88E6XXX_FLAG_MULTI_CHIP |    \
-        MV88E6XXX_FLAG_PORTSTATE |     \
         MV88E6XXX_FLAG_PPU_ACTIVE |    \
         MV88E6XXX_FLAG_SMI_PHY |       \
         MV88E6XXX_FLAG_STU |           \
-        MV88E6XXX_FLAG_SWITCH_MAC |    \
         MV88E6XXX_FLAG_TEMP |          \
         MV88E6XXX_FLAG_TEMP_LIMIT |    \
-        MV88E6XXX_FLAG_VLANTABLE |     \
-        MV88E6XXX_FLAG_VTU)
+        MV88E6XXX_FLAG_VTU |           \
+        MV88E6XXX_FLAGS_EEPROM16 |     \
+        MV88E6XXX_FLAGS_IRL |          \
+        MV88E6XXX_FLAGS_PVT)
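The family masks above are plain bitwise ORs of per-register capability flags, with composites such as MV88E6XXX_FLAGS_IRL and MV88E6XXX_FLAGS_PVT grouping registers that are only useful together. A capability test in the style of the driver's mv88e6xxx_has() then requires every bit of the queried mask to be present (simplified sketch, helper name ours):

```c
/* Simplified version of the driver's capability test: a chip "has" a
 * composite capability such as MV88E6XXX_FLAGS_PVT only if every bit of
 * the queried mask is present in its family flags. */
static int has_flags(unsigned long chip_flags, unsigned long mask)
{
	return (chip_flags & mask) == mask;
}
```

This is why a composite mask works as a single query: a chip missing even one of the grouped registers fails the whole test.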
 
 struct mv88e6xxx_info {
        enum mv88e6xxx_family family;
@@ -535,6 +572,7 @@ struct mv88e6xxx_info {
        unsigned int num_databases;
        unsigned int num_ports;
        unsigned int port_base_addr;
+       unsigned int age_time_coeff;
        unsigned long flags;
 };
 
@@ -595,17 +633,6 @@ struct mv88e6xxx_chip {
         */
        struct mutex    stats_mutex;
 
-       /* This mutex serializes phy access for chips with
-        * indirect phy addressing. It is unused for chips
-        * with direct phy access.
-        */
-       struct mutex    phy_mutex;
-
-       /* This mutex serializes eeprom access for chips with
-        * eeprom support.
-        */
-       struct mutex eeprom_mutex;
-
        struct mv88e6xxx_priv_port      ports[DSA_MAX_PORTS];
 
        /* A switch may have a GPIO line tied to its reset pin. Parse
index 3d2245f..38eaea1 100644 (file)
@@ -310,7 +310,7 @@ static int bfin_mdiobus_write(struct mii_bus *bus, int phy_addr, int regnum,
 static void bfin_mac_adjust_link(struct net_device *dev)
 {
        struct bfin_mac_local *lp = netdev_priv(dev);
-       struct phy_device *phydev = lp->phydev;
+       struct phy_device *phydev = dev->phydev;
        unsigned long flags;
        int new_state = 0;
 
@@ -430,7 +430,6 @@ static int mii_probe(struct net_device *dev, int phy_mode)
        lp->old_link = 0;
        lp->old_speed = 0;
        lp->old_duplex = -1;
-       lp->phydev = phydev;
 
        phy_attached_print(phydev, "mdc_clk=%dHz(mdc_div=%d)@sclk=%dMHz)\n",
                           MDC_CLK, mdc_div, sclk / 1000000);
@@ -450,31 +449,6 @@ static irqreturn_t bfin_mac_wake_interrupt(int irq, void *dev_id)
        return IRQ_HANDLED;
 }
 
-static int
-bfin_mac_ethtool_getsettings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct bfin_mac_local *lp = netdev_priv(dev);
-
-       if (lp->phydev)
-               return phy_ethtool_gset(lp->phydev, cmd);
-
-       return -EINVAL;
-}
-
-static int
-bfin_mac_ethtool_setsettings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct bfin_mac_local *lp = netdev_priv(dev);
-
-       if (!capable(CAP_NET_ADMIN))
-               return -EPERM;
-
-       if (lp->phydev)
-               return phy_ethtool_sset(lp->phydev, cmd);
-
-       return -EINVAL;
-}
-
 static void bfin_mac_ethtool_getdrvinfo(struct net_device *dev,
                                        struct ethtool_drvinfo *info)
 {
@@ -552,8 +526,6 @@ static int bfin_mac_ethtool_get_ts_info(struct net_device *dev,
 #endif
 
 static const struct ethtool_ops bfin_mac_ethtool_ops = {
-       .get_settings = bfin_mac_ethtool_getsettings,
-       .set_settings = bfin_mac_ethtool_setsettings,
        .get_link = ethtool_op_get_link,
        .get_drvinfo = bfin_mac_ethtool_getdrvinfo,
        .get_wol = bfin_mac_ethtool_getwol,
@@ -561,6 +533,8 @@ static const struct ethtool_ops bfin_mac_ethtool_ops = {
 #ifdef CONFIG_BFIN_MAC_USE_HWSTAMP
        .get_ts_info = bfin_mac_ethtool_get_ts_info,
 #endif
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 /**************************************************************************/
@@ -1427,7 +1401,7 @@ static void bfin_mac_timeout(struct net_device *dev)
        if (netif_queue_stopped(dev))
                netif_wake_queue(dev);
 
-       bfin_mac_enable(lp->phydev);
+       bfin_mac_enable(dev->phydev);
 
        /* We can accept TX packets again */
        netif_trans_update(dev); /* prevent tx timeout */
@@ -1491,8 +1465,6 @@ static void bfin_mac_set_multicast_list(struct net_device *dev)
 
 static int bfin_mac_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
 {
-       struct bfin_mac_local *lp = netdev_priv(netdev);
-
        if (!netif_running(netdev))
                return -EINVAL;
 
@@ -1502,8 +1474,8 @@ static int bfin_mac_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
        case SIOCGHWTSTAMP:
                return bfin_mac_hwtstamp_get(netdev, ifr);
        default:
-               if (lp->phydev)
-                       return phy_mii_ioctl(lp->phydev, ifr, cmd);
+               if (netdev->phydev)
+                       return phy_mii_ioctl(netdev->phydev, ifr, cmd);
                else
                        return -EOPNOTSUPP;
        }
@@ -1547,12 +1519,12 @@ static int bfin_mac_open(struct net_device *dev)
        if (ret)
                return ret;
 
-       phy_start(lp->phydev);
+       phy_start(dev->phydev);
        setup_system_regs(dev);
        setup_mac_addr(dev->dev_addr);
 
        bfin_mac_disable();
-       ret = bfin_mac_enable(lp->phydev);
+       ret = bfin_mac_enable(dev->phydev);
        if (ret)
                return ret;
        pr_debug("hardware init finished\n");
@@ -1578,8 +1550,8 @@ static int bfin_mac_close(struct net_device *dev)
        napi_disable(&lp->napi);
        netif_carrier_off(dev);
 
-       phy_stop(lp->phydev);
-       phy_write(lp->phydev, MII_BMCR, BMCR_PDOWN);
+       phy_stop(dev->phydev);
+       phy_write(dev->phydev, MII_BMCR, BMCR_PDOWN);
 
        /* clear everything */
        bfin_mac_shutdown(dev);
index d1217db..8c3b561 100644 (file)
@@ -92,7 +92,6 @@ struct bfin_mac_local {
        int old_speed;
        int old_duplex;
 
-       struct phy_device *phydev;
        struct mii_bus *mii_bus;
 
 #if defined(CONFIG_BFIN_MAC_USE_HWSTAMP)
index cd7e2e5..c83ebae 100644 (file)
@@ -3827,7 +3827,7 @@ static void et131x_tx_timeout(struct net_device *netdev)
        unsigned long flags;
 
        /* If the device is closed, ignore the timeout */
-       if (~(adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE))
+       if (!(adapter->flags & FMP_ADAPTER_INTERRUPT_IN_USE))
                return;
 
        /* Any nonrecoverable hardware error?
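The one-character et131x fix above is the whole bug: `~` is bitwise NOT, so `~(flags & FLAG)` is a nonzero (true) word for every flags value, and the "device closed" early return fired even while the device was open; `!` tests the bit correctly. A standalone demonstration, with a hypothetical flag value chosen only to illustrate the operator difference:

```c
#include <stdint.h>

/* Why the et131x fix above matters: with bitwise NOT, the "closed" test
 * is true whether or not the bit is set, so the timeout handler always
 * bailed out. The flag value below is hypothetical. */
#define INTERRUPT_IN_USE 0x00000008u

static int buggy_is_closed(uint32_t flags)
{
	return ~(flags & INTERRUPT_IN_USE) != 0;	/* always true */
}

static int fixed_is_closed(uint32_t flags)
{
	return !(flags & INTERRUPT_IN_USE);	/* true only if bit clear */
}
```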
index 20760e1..df66418 100644 (file)
@@ -412,13 +412,13 @@ static void
 au1000_adjust_link(struct net_device *dev)
 {
        struct au1000_private *aup = netdev_priv(dev);
-       struct phy_device *phydev = aup->phy_dev;
+       struct phy_device *phydev = dev->phydev;
        unsigned long flags;
        u32 reg;
 
        int status_change = 0;
 
-       BUG_ON(!aup->phy_dev);
+       BUG_ON(!phydev);
 
        spin_lock_irqsave(&aup->lock, flags);
 
@@ -579,7 +579,6 @@ static int au1000_mii_probe(struct net_device *dev)
        aup->old_link = 0;
        aup->old_speed = 0;
        aup->old_duplex = -1;
-       aup->phy_dev = phydev;
 
        phy_attached_info(phydev);
 
@@ -678,29 +677,6 @@ au1000_setup_hw_rings(struct au1000_private *aup, void __iomem *tx_base)
  * ethtool operations
  */
 
-static int au1000_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct au1000_private *aup = netdev_priv(dev);
-
-       if (aup->phy_dev)
-               return phy_ethtool_gset(aup->phy_dev, cmd);
-
-       return -EINVAL;
-}
-
-static int au1000_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct au1000_private *aup = netdev_priv(dev);
-
-       if (!capable(CAP_NET_ADMIN))
-               return -EPERM;
-
-       if (aup->phy_dev)
-               return phy_ethtool_sset(aup->phy_dev, cmd);
-
-       return -EINVAL;
-}
-
 static void
 au1000_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
 {
@@ -725,12 +701,12 @@ static u32 au1000_get_msglevel(struct net_device *dev)
 }
 
 static const struct ethtool_ops au1000_ethtool_ops = {
-       .get_settings = au1000_get_settings,
-       .set_settings = au1000_set_settings,
        .get_drvinfo = au1000_get_drvinfo,
        .get_link = ethtool_op_get_link,
        .get_msglevel = au1000_get_msglevel,
        .set_msglevel = au1000_set_msglevel,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 
@@ -778,8 +754,8 @@ static int au1000_init(struct net_device *dev)
 #ifndef CONFIG_CPU_LITTLE_ENDIAN
        control |= MAC_BIG_ENDIAN;
 #endif
-       if (aup->phy_dev) {
-               if (aup->phy_dev->link && (DUPLEX_FULL == aup->phy_dev->duplex))
+       if (dev->phydev) {
+               if (dev->phydev->link && (DUPLEX_FULL == dev->phydev->duplex))
                        control |= MAC_FULL_DUPLEX;
                else
                        control |= MAC_DISABLE_RX_OWN;
@@ -891,11 +867,10 @@ static int au1000_rx(struct net_device *dev)
 
 static void au1000_update_tx_stats(struct net_device *dev, u32 status)
 {
-       struct au1000_private *aup = netdev_priv(dev);
        struct net_device_stats *ps = &dev->stats;
 
        if (status & TX_FRAME_ABORTED) {
-               if (!aup->phy_dev || (DUPLEX_FULL == aup->phy_dev->duplex)) {
+               if (!dev->phydev || (DUPLEX_FULL == dev->phydev->duplex)) {
                        if (status & (TX_JAB_TIMEOUT | TX_UNDERRUN)) {
                                /* any other tx errors are only valid
                                 * in half duplex mode
@@ -975,10 +950,10 @@ static int au1000_open(struct net_device *dev)
                return retval;
        }
 
-       if (aup->phy_dev) {
+       if (dev->phydev) {
                /* cause the PHY state machine to schedule a link state check */
-               aup->phy_dev->state = PHY_CHANGELINK;
-               phy_start(aup->phy_dev);
+               dev->phydev->state = PHY_CHANGELINK;
+               phy_start(dev->phydev);
        }
 
        netif_start_queue(dev);
@@ -995,8 +970,8 @@ static int au1000_close(struct net_device *dev)
 
        netif_dbg(aup, drv, dev, "close: dev=%p\n", dev);
 
-       if (aup->phy_dev)
-               phy_stop(aup->phy_dev);
+       if (dev->phydev)
+               phy_stop(dev->phydev);
 
        spin_lock_irqsave(&aup->lock, flags);
 
@@ -1110,15 +1085,13 @@ static void au1000_multicast_list(struct net_device *dev)
 
 static int au1000_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
 {
-       struct au1000_private *aup = netdev_priv(dev);
-
        if (!netif_running(dev))
                return -EINVAL;
 
-       if (!aup->phy_dev)
+       if (!dev->phydev)
                return -EINVAL; /* PHY not controllable */
 
-       return phy_mii_ioctl(aup->phy_dev, rq, cmd);
+       return phy_mii_ioctl(dev->phydev, rq, cmd);
 }
 
 static const struct net_device_ops au1000_netdev_ops = {
index ca53024..4c47c23 100644 (file)
@@ -106,7 +106,6 @@ struct au1000_private {
        int old_speed;
        int old_duplex;
 
-       struct phy_device *phy_dev;
        struct mii_bus *mii_bus;
 
        /* PHY configuration */
index dc2c35d..0d4ea92 100644 (file)
@@ -259,6 +259,7 @@ static void nb8800_receive(struct net_device *dev, unsigned int i,
                if (err) {
                        netdev_err(dev, "rx buffer allocation failed\n");
                        dev->stats.rx_dropped++;
+                       dev_kfree_skb(skb);
                        return;
                }
 
@@ -1418,7 +1419,7 @@ static int nb8800_probe(struct platform_device *pdev)
        if (ops && ops->reset) {
                ret = ops->reset(dev);
                if (ret)
-                       goto err_free_dev;
+                       goto err_disable_clk;
        }
 
        bus = devm_mdiobus_alloc(&pdev->dev);
index d74a92e..bd8c80c 100644 (file)
@@ -140,10 +140,18 @@ config BNX2X_SRIOV
          allows for virtual function acceleration in virtual environments.
 
 config BGMAC
-       tristate "BCMA bus GBit core support"
+       tristate
+       help
+         This enables the integrated ethernet controller support for many
+         Broadcom (mostly iProc) SoCs. An appropriate bus interface driver
+         needs to be enabled to select this.
+
+config BGMAC_BCMA
+       tristate "Broadcom iProc GBit BCMA support"
        depends on BCMA && BCMA_HOST_SOC
        depends on HAS_DMA
        depends on BCM47XX || ARCH_BCM_5301X || COMPILE_TEST
+       select BGMAC
        select PHYLIB
        select FIXED_PHY
        ---help---
@@ -152,6 +160,19 @@ config BGMAC
           When using this driver on a BCM4706, BCMA_DRIVER_GMAC_CMN must also
           be enabled to make it work.
 
+config BGMAC_PLATFORM
+       tristate "Broadcom iProc GBit platform support"
+       depends on HAS_DMA
+       depends on ARCH_BCM_IPROC || COMPILE_TEST
+       depends on OF
+       select BGMAC
+       select PHYLIB
+       select FIXED_PHY
+       default ARCH_BCM_IPROC
+       ---help---
+         Say Y here if you want to use the Broadcom iProc Gigabit Ethernet
+         controller through the generic platform interface.
+
 config SYSTEMPORT
        tristate "Broadcom SYSTEMPORT internal MAC support"
        depends on OF
index 00584d7..79f2372 100644 (file)
@@ -11,5 +11,7 @@ obj-$(CONFIG_BNX2X) += bnx2x/
 obj-$(CONFIG_SB1250_MAC) += sb1250-mac.o
 obj-$(CONFIG_TIGON3) += tg3.o
 obj-$(CONFIG_BGMAC) += bgmac.o
+obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o
+obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o
 obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o
 obj-$(CONFIG_BNXT) += bnxt/
diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c b/drivers/net/ethernet/broadcom/bgmac-bcma-mdio.c
new file mode 100644 (file)
index 0000000..7c19c8e
--- /dev/null
@@ -0,0 +1,266 @@
+/*
+ * Driver for (BCM4706)? GBit MAC core on BCMA bus.
+ *
+ * Copyright (C) 2012 Rafał Miłecki <zajec5@gmail.com>
+ *
+ * Licensed under the GNU/GPL. See COPYING for details.
+ */
+
+#define pr_fmt(fmt)            KBUILD_MODNAME ": " fmt
+
+#include <linux/bcma/bcma.h>
+#include <linux/brcmphy.h>
+#include "bgmac.h"
+
+struct bcma_mdio {
+       struct bcma_device *core;
+       u8 phyaddr;
+};
+
+static bool bcma_mdio_wait_value(struct bcma_device *core, u16 reg, u32 mask,
+                                u32 value, int timeout)
+{
+       u32 val;
+       int i;
+
+       for (i = 0; i < timeout / 10; i++) {
+               val = bcma_read32(core, reg);
+               if ((val & mask) == value)
+                       return true;
+               udelay(10);
+       }
+       dev_err(&core->dev, "Timeout waiting for reg 0x%X\n", reg);
+       return false;
+}
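bcma_mdio_wait_value() above is a classic bounded poll: at most timeout/10 reads spaced 10 µs apart, succeeding as soon as the masked register matches the expected value. The control flow can be modelled in user space by standing a sequence of readings in for the register (reading values hypothetical):

```c
#include <stdint.h>

/* User-space model of the bounded poll above: scan a sequence of register
 * readings (standing in for bcma_read32()) for at most timeout/10 tries,
 * one per 10 us delay in the real driver. Reading values are hypothetical. */
static int wait_value(const uint32_t *readings, int n, uint32_t mask,
		      uint32_t value, int timeout_us)
{
	int i;

	for (i = 0; i < timeout_us / 10 && i < n; i++) {
		if ((readings[i] & mask) == value)
			return 1;	/* matched before the timeout */
		/* udelay(10) here in the real driver */
	}
	return 0;	/* timed out */
}
```

Note the granularity: a timeout below 10 µs yields zero polls, which is why callers in the driver pass budgets like 1000 µs.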
+
+/**************************************************
+ * PHY ops
+ **************************************************/
+
+static u16 bcma_mdio_phy_read(struct bcma_mdio *bcma_mdio, u8 phyaddr, u8 reg)
+{
+       struct bcma_device *core;
+       u16 phy_access_addr;
+       u16 phy_ctl_addr;
+       u32 tmp;
+
+       BUILD_BUG_ON(BGMAC_PA_DATA_MASK != BCMA_GMAC_CMN_PA_DATA_MASK);
+       BUILD_BUG_ON(BGMAC_PA_ADDR_MASK != BCMA_GMAC_CMN_PA_ADDR_MASK);
+       BUILD_BUG_ON(BGMAC_PA_ADDR_SHIFT != BCMA_GMAC_CMN_PA_ADDR_SHIFT);
+       BUILD_BUG_ON(BGMAC_PA_REG_MASK != BCMA_GMAC_CMN_PA_REG_MASK);
+       BUILD_BUG_ON(BGMAC_PA_REG_SHIFT != BCMA_GMAC_CMN_PA_REG_SHIFT);
+       BUILD_BUG_ON(BGMAC_PA_WRITE != BCMA_GMAC_CMN_PA_WRITE);
+       BUILD_BUG_ON(BGMAC_PA_START != BCMA_GMAC_CMN_PA_START);
+       BUILD_BUG_ON(BGMAC_PC_EPA_MASK != BCMA_GMAC_CMN_PC_EPA_MASK);
+       BUILD_BUG_ON(BGMAC_PC_MCT_MASK != BCMA_GMAC_CMN_PC_MCT_MASK);
+       BUILD_BUG_ON(BGMAC_PC_MCT_SHIFT != BCMA_GMAC_CMN_PC_MCT_SHIFT);
+       BUILD_BUG_ON(BGMAC_PC_MTE != BCMA_GMAC_CMN_PC_MTE);
+
+       if (bcma_mdio->core->id.id == BCMA_CORE_4706_MAC_GBIT) {
+               core = bcma_mdio->core->bus->drv_gmac_cmn.core;
+               phy_access_addr = BCMA_GMAC_CMN_PHY_ACCESS;
+               phy_ctl_addr = BCMA_GMAC_CMN_PHY_CTL;
+       } else {
+               core = bcma_mdio->core;
+               phy_access_addr = BGMAC_PHY_ACCESS;
+               phy_ctl_addr = BGMAC_PHY_CNTL;
+       }
+
+       tmp = bcma_read32(core, phy_ctl_addr);
+       tmp &= ~BGMAC_PC_EPA_MASK;
+       tmp |= phyaddr;
+       bcma_write32(core, phy_ctl_addr, tmp);
+
+       tmp = BGMAC_PA_START;
+       tmp |= phyaddr << BGMAC_PA_ADDR_SHIFT;
+       tmp |= reg << BGMAC_PA_REG_SHIFT;
+       bcma_write32(core, phy_access_addr, tmp);
+
+       if (!bcma_mdio_wait_value(core, phy_access_addr, BGMAC_PA_START, 0,
+                                 1000)) {
+               dev_err(&core->dev, "Reading PHY %d register 0x%X failed\n",
+                       phyaddr, reg);
+               return 0xffff;
+       }
+
+       return bcma_read32(core, phy_access_addr) & BGMAC_PA_DATA_MASK;
+}
+
+/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphywr */
+static int bcma_mdio_phy_write(struct bcma_mdio *bcma_mdio, u8 phyaddr, u8 reg,
+                              u16 value)
+{
+       struct bcma_device *core;
+       u16 phy_access_addr;
+       u16 phy_ctl_addr;
+       u32 tmp;
+
+       if (bcma_mdio->core->id.id == BCMA_CORE_4706_MAC_GBIT) {
+               core = bcma_mdio->core->bus->drv_gmac_cmn.core;
+               phy_access_addr = BCMA_GMAC_CMN_PHY_ACCESS;
+               phy_ctl_addr = BCMA_GMAC_CMN_PHY_CTL;
+       } else {
+               core = bcma_mdio->core;
+               phy_access_addr = BGMAC_PHY_ACCESS;
+               phy_ctl_addr = BGMAC_PHY_CNTL;
+       }
+
+       tmp = bcma_read32(core, phy_ctl_addr);
+       tmp &= ~BGMAC_PC_EPA_MASK;
+       tmp |= phyaddr;
+       bcma_write32(core, phy_ctl_addr, tmp);
+
+       bcma_write32(bcma_mdio->core, BGMAC_INT_STATUS, BGMAC_IS_MDIO);
+       if (bcma_read32(bcma_mdio->core, BGMAC_INT_STATUS) & BGMAC_IS_MDIO)
+               dev_warn(&core->dev, "Error setting MDIO int\n");
+
+       tmp = BGMAC_PA_START;
+       tmp |= BGMAC_PA_WRITE;
+       tmp |= phyaddr << BGMAC_PA_ADDR_SHIFT;
+       tmp |= reg << BGMAC_PA_REG_SHIFT;
+       tmp |= value;
+       bcma_write32(core, phy_access_addr, tmp);
+
+       if (!bcma_mdio_wait_value(core, phy_access_addr, BGMAC_PA_START, 0,
+                                 1000)) {
+               dev_err(&core->dev, "Writing to PHY %d register 0x%X failed\n",
+                       phyaddr, reg);
+               return -ETIMEDOUT;
+       }
+
+       return 0;
+}
+
+/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphyinit */
+static void bcma_mdio_phy_init(struct bcma_mdio *bcma_mdio)
+{
+       struct bcma_chipinfo *ci = &bcma_mdio->core->bus->chipinfo;
+       u8 i;
+
+       if (ci->id == BCMA_CHIP_ID_BCM5356) {
+               for (i = 0; i < 5; i++) {
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x008b);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x15, 0x0100);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000f);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x12, 0x2aaa);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000b);
+               }
+       }
+       if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg != 10) ||
+           (ci->id == BCMA_CHIP_ID_BCM4749 && ci->pkg != 10) ||
+           (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg != 9)) {
+               struct bcma_drv_cc *cc = &bcma_mdio->core->bus->drv_cc;
+
+               bcma_chipco_chipctl_maskset(cc, 2, ~0xc0000000, 0);
+               bcma_chipco_chipctl_maskset(cc, 4, ~0x80000000, 0);
+               for (i = 0; i < 5; i++) {
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000f);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x16, 0x5284);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000b);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x17, 0x0010);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000f);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x16, 0x5296);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x17, 0x1073);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x17, 0x9073);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x16, 0x52b6);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x17, 0x9273);
+                       bcma_mdio_phy_write(bcma_mdio, i, 0x1f, 0x000b);
+               }
+       }
+}
+
+/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphyreset */
+static int bcma_mdio_phy_reset(struct mii_bus *bus)
+{
+       struct bcma_mdio *bcma_mdio = bus->priv;
+       u8 phyaddr = bcma_mdio->phyaddr;
+
+       if (bcma_mdio->phyaddr == BGMAC_PHY_NOREGS)
+               return 0;
+
+       bcma_mdio_phy_write(bcma_mdio, phyaddr, MII_BMCR, BMCR_RESET);
+       udelay(100);
+       if (bcma_mdio_phy_read(bcma_mdio, phyaddr, MII_BMCR) & BMCR_RESET)
+               dev_err(&bcma_mdio->core->dev, "PHY reset failed\n");
+       bcma_mdio_phy_init(bcma_mdio);
+
+       return 0;
+}
+
+/**************************************************
+ * MII
+ **************************************************/
+
+static int bcma_mdio_mii_read(struct mii_bus *bus, int mii_id, int regnum)
+{
+       return bcma_mdio_phy_read(bus->priv, mii_id, regnum);
+}
+
+static int bcma_mdio_mii_write(struct mii_bus *bus, int mii_id, int regnum,
+                              u16 value)
+{
+       return bcma_mdio_phy_write(bus->priv, mii_id, regnum, value);
+}
+
+struct mii_bus *bcma_mdio_mii_register(struct bcma_device *core, u8 phyaddr)
+{
+       struct bcma_mdio *bcma_mdio;
+       struct mii_bus *mii_bus;
+       int err;
+
+       bcma_mdio = kzalloc(sizeof(*bcma_mdio), GFP_KERNEL);
+       if (!bcma_mdio)
+               return ERR_PTR(-ENOMEM);
+
+       mii_bus = mdiobus_alloc();
+       if (!mii_bus) {
+               err = -ENOMEM;
+               goto err;
+       }
+
+       mii_bus->name = "bcma_mdio mii bus";
+       sprintf(mii_bus->id, "%s-%d-%d", "bcma_mdio", core->bus->num,
+               core->core_unit);
+       mii_bus->priv = bcma_mdio;
+       mii_bus->read = bcma_mdio_mii_read;
+       mii_bus->write = bcma_mdio_mii_write;
+       mii_bus->reset = bcma_mdio_phy_reset;
+       mii_bus->parent = &core->dev;
+       mii_bus->phy_mask = ~(1 << phyaddr);
+
+       bcma_mdio->core = core;
+       bcma_mdio->phyaddr = phyaddr;
+
+       err = mdiobus_register(mii_bus);
+       if (err) {
+               dev_err(&core->dev, "Registration of mii bus failed\n");
+               goto err_free_bus;
+       }
+
+       return mii_bus;
+
+err_free_bus:
+       mdiobus_free(mii_bus);
+err:
+       kfree(bcma_mdio);
+       return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(bcma_mdio_mii_register);
+
+void bcma_mdio_mii_unregister(struct mii_bus *mii_bus)
+{
+       struct bcma_mdio *bcma_mdio;
+
+       if (!mii_bus)
+               return;
+
+       bcma_mdio = mii_bus->priv;
+
+       mdiobus_unregister(mii_bus);
+       mdiobus_free(mii_bus);
+       kfree(bcma_mdio);
+}
+EXPORT_SYMBOL_GPL(bcma_mdio_mii_unregister);
+
+MODULE_AUTHOR("Rafał Miłecki");
+MODULE_LICENSE("GPL");
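The `bcma_mdio_wait_value()` helper added above implements a common MMIO idiom: poll a register in fixed 10-microsecond steps until the masked value matches, giving up after the timeout. A minimal user-space sketch of the same pattern, with a callback standing in for `bcma_read32()` (all names here are illustrative, not kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Poll until (read() & mask) == value, or timeout_us elapses in
 * 10-unit steps -- the same loop shape as bcma_mdio_wait_value().
 * The reg_read callback stands in for the bus read accessor. */
static bool wait_value(uint32_t (*reg_read)(void *ctx), void *ctx,
                       uint32_t mask, uint32_t value, int timeout_us)
{
	for (int i = 0; i < timeout_us / 10; i++) {
		if ((reg_read(ctx) & mask) == value)
			return true;
		/* the driver calls udelay(10) here */
	}
	return false;
}

/* Toy "register" that deasserts a busy bit after a few reads. */
static uint32_t fake_reg(void *ctx)
{
	int *reads_left = ctx;

	return (*reads_left)-- > 0 ? 0x80000000u : 0;
}
```

The driver uses this to wait for `BGMAC_PA_START` to clear after kicking off a PHY access, treating a timeout as a failed read (`0xffff`) or write (`-ETIMEDOUT`).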
diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c
new file mode 100644
index 0000000..9a9745c
--- /dev/null
@@ -0,0 +1,315 @@
+/*
+ * Driver for (BCM4706)? GBit MAC core on BCMA bus.
+ *
+ * Copyright (C) 2012 Rafał Miłecki <zajec5@gmail.com>
+ *
+ * Licensed under the GNU/GPL. See COPYING for details.
+ */
+
+#define pr_fmt(fmt)            KBUILD_MODNAME ": " fmt
+
+#include <linux/bcma/bcma.h>
+#include <linux/brcmphy.h>
+#include <linux/etherdevice.h>
+#include "bgmac.h"
+
+static inline bool bgmac_is_bcm4707_family(struct bcma_device *core)
+{
+       switch (core->bus->chipinfo.id) {
+       case BCMA_CHIP_ID_BCM4707:
+       case BCMA_CHIP_ID_BCM47094:
+       case BCMA_CHIP_ID_BCM53018:
+               return true;
+       default:
+               return false;
+       }
+}
+
+/**************************************************
+ * BCMA bus ops
+ **************************************************/
+
+static u32 bcma_bgmac_read(struct bgmac *bgmac, u16 offset)
+{
+       return bcma_read32(bgmac->bcma.core, offset);
+}
+
+static void bcma_bgmac_write(struct bgmac *bgmac, u16 offset, u32 value)
+{
+       bcma_write32(bgmac->bcma.core, offset, value);
+}
+
+static u32 bcma_bgmac_idm_read(struct bgmac *bgmac, u16 offset)
+{
+       return bcma_aread32(bgmac->bcma.core, offset);
+}
+
+static void bcma_bgmac_idm_write(struct bgmac *bgmac, u16 offset, u32 value)
+{
+       return bcma_awrite32(bgmac->bcma.core, offset, value);
+}
+
+static bool bcma_bgmac_clk_enabled(struct bgmac *bgmac)
+{
+       return bcma_core_is_enabled(bgmac->bcma.core);
+}
+
+static void bcma_bgmac_clk_enable(struct bgmac *bgmac, u32 flags)
+{
+       bcma_core_enable(bgmac->bcma.core, flags);
+}
+
+static void bcma_bgmac_cco_ctl_maskset(struct bgmac *bgmac, u32 offset,
+                                      u32 mask, u32 set)
+{
+       struct bcma_drv_cc *cc = &bgmac->bcma.core->bus->drv_cc;
+
+       bcma_chipco_chipctl_maskset(cc, offset, mask, set);
+}
+
+static u32 bcma_bgmac_get_bus_clock(struct bgmac *bgmac)
+{
+       struct bcma_drv_cc *cc = &bgmac->bcma.core->bus->drv_cc;
+
+       return bcma_pmu_get_bus_clock(cc);
+}
+
+static void bcma_bgmac_cmn_maskset32(struct bgmac *bgmac, u16 offset, u32 mask,
+                                    u32 set)
+{
+       bcma_maskset32(bgmac->bcma.cmn, offset, mask, set);
+}
+
+static const struct bcma_device_id bgmac_bcma_tbl[] = {
+       BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_4706_MAC_GBIT,
+                 BCMA_ANY_REV, BCMA_ANY_CLASS),
+       BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_MAC_GBIT, BCMA_ANY_REV,
+                 BCMA_ANY_CLASS),
+       {},
+};
+MODULE_DEVICE_TABLE(bcma, bgmac_bcma_tbl);
+
+/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipattach */
+static int bgmac_probe(struct bcma_device *core)
+{
+       struct ssb_sprom *sprom = &core->bus->sprom;
+       struct mii_bus *mii_bus;
+       struct bgmac *bgmac;
+       u8 *mac;
+       int err;
+
+       bgmac = kzalloc(sizeof(*bgmac), GFP_KERNEL);
+       if (!bgmac)
+               return -ENOMEM;
+
+       bgmac->bcma.core = core;
+       bgmac->dev = &core->dev;
+       bgmac->dma_dev = core->dma_dev;
+       bgmac->irq = core->irq;
+
+       bcma_set_drvdata(core, bgmac);
+
+       switch (core->core_unit) {
+       case 0:
+               mac = sprom->et0mac;
+               break;
+       case 1:
+               mac = sprom->et1mac;
+               break;
+       case 2:
+               mac = sprom->et2mac;
+               break;
+       default:
+               dev_err(bgmac->dev, "Unsupported core_unit %d\n",
+                       core->core_unit);
+               err = -ENOTSUPP;
+               goto err;
+       }
+
+       ether_addr_copy(bgmac->mac_addr, mac);
+
+       /* On BCM4706 we need common core to access PHY */
+       if (core->id.id == BCMA_CORE_4706_MAC_GBIT &&
+           !core->bus->drv_gmac_cmn.core) {
+               dev_err(bgmac->dev, "GMAC CMN core not found (required for BCM4706)\n");
+               err = -ENODEV;
+               goto err;
+       }
+       bgmac->bcma.cmn = core->bus->drv_gmac_cmn.core;
+
+       switch (core->core_unit) {
+       case 0:
+               bgmac->phyaddr = sprom->et0phyaddr;
+               break;
+       case 1:
+               bgmac->phyaddr = sprom->et1phyaddr;
+               break;
+       case 2:
+               bgmac->phyaddr = sprom->et2phyaddr;
+               break;
+       }
+       bgmac->phyaddr &= BGMAC_PHY_MASK;
+       if (bgmac->phyaddr == BGMAC_PHY_MASK) {
+               dev_err(bgmac->dev, "No PHY found\n");
+               err = -ENODEV;
+               goto err;
+       }
+       dev_info(bgmac->dev, "Found PHY addr: %d%s\n", bgmac->phyaddr,
+                bgmac->phyaddr == BGMAC_PHY_NOREGS ? " (NOREGS)" : "");
+
+       if (!bgmac_is_bcm4707_family(core)) {
+               mii_bus = bcma_mdio_mii_register(core, bgmac->phyaddr);
+               if (IS_ERR(mii_bus)) {
+                       err = PTR_ERR(mii_bus);
+                       goto err;
+               }
+
+               bgmac->mii_bus = mii_bus;
+       }
+
+       if (core->bus->hosttype == BCMA_HOSTTYPE_PCI) {
+               dev_err(bgmac->dev, "PCI setup not implemented\n");
+               err = -ENOTSUPP;
+               goto err1;
+       }
+
+       bgmac->has_robosw = !!(core->bus->sprom.boardflags_lo &
+                              BGMAC_BFL_ENETROBO);
+       if (bgmac->has_robosw)
+               dev_warn(bgmac->dev, "Support for Roboswitch not implemented\n");
+
+       if (core->bus->sprom.boardflags_lo & BGMAC_BFL_ENETADM)
+               dev_warn(bgmac->dev, "Support for ADMtek ethernet switch not implemented\n");
+
+       /* Feature Flags */
+       switch (core->bus->chipinfo.id) {
+       case BCMA_CHIP_ID_BCM5357:
+               bgmac->feature_flags |= BGMAC_FEAT_SET_RXQ_CLK;
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
+               bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
+               if (core->bus->chipinfo.pkg == BCMA_PKG_ID_BCM47186) {
+                       bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
+                       bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
+               }
+               if (core->bus->chipinfo.pkg == BCMA_PKG_ID_BCM5358)
+                       bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_EPHYRMII;
+               break;
+       case BCMA_CHIP_ID_BCM53572:
+               bgmac->feature_flags |= BGMAC_FEAT_SET_RXQ_CLK;
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
+               bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
+               if (core->bus->chipinfo.pkg == BCMA_PKG_ID_BCM47188) {
+                       bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
+                       bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
+               }
+               break;
+       case BCMA_CHIP_ID_BCM4749:
+               bgmac->feature_flags |= BGMAC_FEAT_SET_RXQ_CLK;
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
+               bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
+               if (core->bus->chipinfo.pkg == 10) {
+                       bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
+                       bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
+               }
+               break;
+       case BCMA_CHIP_ID_BCM4716:
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               /* fallthrough */
+       case BCMA_CHIP_ID_BCM47162:
+               bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL2;
+               bgmac->feature_flags |= BGMAC_FEAT_SET_RXQ_CLK;
+               break;
+       /* bcm4707_family */
+       case BCMA_CHIP_ID_BCM4707:
+       case BCMA_CHIP_ID_BCM47094:
+       case BCMA_CHIP_ID_BCM53018:
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               bgmac->feature_flags |= BGMAC_FEAT_NO_RESET;
+               bgmac->feature_flags |= BGMAC_FEAT_FORCE_SPEED_2500;
+               break;
+       default:
+               bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+               bgmac->feature_flags |= BGMAC_FEAT_SET_RXQ_CLK;
+       }
+
+       if (!bgmac_is_bcm4707_family(core) && core->id.rev > 2)
+               bgmac->feature_flags |= BGMAC_FEAT_MISC_PLL_REQ;
+
+       if (core->id.id == BCMA_CORE_4706_MAC_GBIT) {
+               bgmac->feature_flags |= BGMAC_FEAT_CMN_PHY_CTL;
+               bgmac->feature_flags |= BGMAC_FEAT_NO_CLR_MIB;
+       }
+
+       if (core->id.rev >= 4) {
+               bgmac->feature_flags |= BGMAC_FEAT_CMDCFG_SR_REV4;
+               bgmac->feature_flags |= BGMAC_FEAT_TX_MASK_SETUP;
+               bgmac->feature_flags |= BGMAC_FEAT_RX_MASK_SETUP;
+       }
+
+       bgmac->read = bcma_bgmac_read;
+       bgmac->write = bcma_bgmac_write;
+       bgmac->idm_read = bcma_bgmac_idm_read;
+       bgmac->idm_write = bcma_bgmac_idm_write;
+       bgmac->clk_enabled = bcma_bgmac_clk_enabled;
+       bgmac->clk_enable = bcma_bgmac_clk_enable;
+       bgmac->cco_ctl_maskset = bcma_bgmac_cco_ctl_maskset;
+       bgmac->get_bus_clock = bcma_bgmac_get_bus_clock;
+       bgmac->cmn_maskset32 = bcma_bgmac_cmn_maskset32;
+
+       err = bgmac_enet_probe(bgmac);
+       if (err)
+               goto err1;
+
+       return 0;
+
+err1:
+       bcma_mdio_mii_unregister(bgmac->mii_bus);
+err:
+       kfree(bgmac);
+       bcma_set_drvdata(core, NULL);
+
+       return err;
+}
+
+static void bgmac_remove(struct bcma_device *core)
+{
+       struct bgmac *bgmac = bcma_get_drvdata(core);
+
+       bcma_mdio_mii_unregister(bgmac->mii_bus);
+       bgmac_enet_remove(bgmac);
+       bcma_set_drvdata(core, NULL);
+       kfree(bgmac);
+}
+
+static struct bcma_driver bgmac_bcma_driver = {
+       .name           = KBUILD_MODNAME,
+       .id_table       = bgmac_bcma_tbl,
+       .probe          = bgmac_probe,
+       .remove         = bgmac_remove,
+};
+
+static int __init bgmac_init(void)
+{
+       int err;
+
+       err = bcma_driver_register(&bgmac_bcma_driver);
+       if (err)
+               return err;
+       pr_info("Broadcom 47xx GBit MAC driver loaded\n");
+
+       return 0;
+}
+
+static void __exit bgmac_exit(void)
+{
+       bcma_driver_unregister(&bgmac_bcma_driver);
+}
+
+module_init(bgmac_init)
+module_exit(bgmac_exit)
+
+MODULE_AUTHOR("Rafał Miłecki");
+MODULE_LICENSE("GPL");
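The key move in this bgmac-bcma.c backend is that all register access now goes through function pointers on the bgmac object (`bgmac->read`, `bgmac->write`, `bgmac->idm_read`, ...), which the probe fills in with BCMA-specific implementations while the platform backend supplies MMIO-based ones. A hypothetical stand-alone sketch of that indirection (the struct and names below are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Core/backend split: core code only ever touches the hooks, and each
 * backend fills them in at probe time with its own bus accessors. */
struct fake_bgmac {
	uint32_t regs[64];	/* stands in for device MMIO space */
	uint32_t (*read)(struct fake_bgmac *b, uint16_t off);
	void (*write)(struct fake_bgmac *b, uint16_t off, uint32_t val);
};

/* One backend's accessors: plain memory-backed "registers". */
static uint32_t mem_read(struct fake_bgmac *b, uint16_t off)
{
	return b->regs[off / 4];
}

static void mem_write(struct fake_bgmac *b, uint16_t off, uint32_t val)
{
	b->regs[off / 4] = val;
}

/* Bus-agnostic core code: read-modify-write through the hooks only. */
static void maskset32(struct fake_bgmac *b, uint16_t off,
		      uint32_t mask, uint32_t set)
{
	b->write(b, off, (b->read(b, off) & mask) | set);
}
```

This is why bgmac.c (further down in the diff) can replace direct `bcma_read32(bgmac->core, ...)` calls with `bgmac_read(bgmac, ...)` and drop its dependency on `bgmac->core` entirely.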
diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
new file mode 100644
index 0000000..be52f27
--- /dev/null
@@ -0,0 +1,185 @@
+/*
+ * Copyright (C) 2016 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#define pr_fmt(fmt)            KBUILD_MODNAME ": " fmt
+
+#include <linux/bcma/bcma.h>
+#include <linux/etherdevice.h>
+#include <linux/of_address.h>
+#include <linux/of_net.h>
+#include "bgmac.h"
+
+static u32 platform_bgmac_read(struct bgmac *bgmac, u16 offset)
+{
+       return readl(bgmac->plat.base + offset);
+}
+
+static void platform_bgmac_write(struct bgmac *bgmac, u16 offset, u32 value)
+{
+       writel(value, bgmac->plat.base + offset);
+}
+
+static u32 platform_bgmac_idm_read(struct bgmac *bgmac, u16 offset)
+{
+       return readl(bgmac->plat.idm_base + offset);
+}
+
+static void platform_bgmac_idm_write(struct bgmac *bgmac, u16 offset, u32 value)
+{
+       return writel(value, bgmac->plat.idm_base + offset);
+}
+
+static bool platform_bgmac_clk_enabled(struct bgmac *bgmac)
+{
+       if ((bgmac_idm_read(bgmac, BCMA_IOCTL) &
+            (BCMA_IOCTL_CLK | BCMA_IOCTL_FGC)) != BCMA_IOCTL_CLK)
+               return false;
+       if (bgmac_idm_read(bgmac, BCMA_RESET_CTL) & BCMA_RESET_CTL_RESET)
+               return false;
+       return true;
+}
+
+static void platform_bgmac_clk_enable(struct bgmac *bgmac, u32 flags)
+{
+       bgmac_idm_write(bgmac, BCMA_IOCTL,
+                       (BCMA_IOCTL_CLK | BCMA_IOCTL_FGC | flags));
+       bgmac_idm_read(bgmac, BCMA_IOCTL);
+
+       bgmac_idm_write(bgmac, BCMA_RESET_CTL, 0);
+       bgmac_idm_read(bgmac, BCMA_RESET_CTL);
+       udelay(1);
+
+       bgmac_idm_write(bgmac, BCMA_IOCTL, (BCMA_IOCTL_CLK | flags));
+       bgmac_idm_read(bgmac, BCMA_IOCTL);
+       udelay(1);
+}
+
+static void platform_bgmac_cco_ctl_maskset(struct bgmac *bgmac, u32 offset,
+                                          u32 mask, u32 set)
+{
+       /* This shouldn't be encountered */
+       WARN_ON(1);
+}
+
+static u32 platform_bgmac_get_bus_clock(struct bgmac *bgmac)
+{
+       /* This shouldn't be encountered */
+       WARN_ON(1);
+
+       return 0;
+}
+
+static void platform_bgmac_cmn_maskset32(struct bgmac *bgmac, u16 offset,
+                                        u32 mask, u32 set)
+{
+       /* This shouldn't be encountered */
+       WARN_ON(1);
+}
+
+static int bgmac_probe(struct platform_device *pdev)
+{
+       struct device_node *np = pdev->dev.of_node;
+       struct bgmac *bgmac;
+       struct resource *regs;
+       const u8 *mac_addr;
+
+       bgmac = devm_kzalloc(&pdev->dev, sizeof(*bgmac), GFP_KERNEL);
+       if (!bgmac)
+               return -ENOMEM;
+
+       platform_set_drvdata(pdev, bgmac);
+
+       /* Set the features of the 4707 family */
+       bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
+       bgmac->feature_flags |= BGMAC_FEAT_NO_RESET;
+       bgmac->feature_flags |= BGMAC_FEAT_FORCE_SPEED_2500;
+       bgmac->feature_flags |= BGMAC_FEAT_CMDCFG_SR_REV4;
+       bgmac->feature_flags |= BGMAC_FEAT_TX_MASK_SETUP;
+       bgmac->feature_flags |= BGMAC_FEAT_RX_MASK_SETUP;
+
+       bgmac->dev = &pdev->dev;
+       bgmac->dma_dev = &pdev->dev;
+
+       mac_addr = of_get_mac_address(np);
+       if (mac_addr)
+               ether_addr_copy(bgmac->mac_addr, mac_addr);
+       else
+               dev_warn(&pdev->dev, "MAC address not present in device tree\n");
+
+       bgmac->irq = platform_get_irq(pdev, 0);
+       if (bgmac->irq < 0) {
+               dev_err(&pdev->dev, "Unable to obtain IRQ\n");
+               return bgmac->irq;
+       }
+
+       regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "amac_base");
+       if (!regs) {
+               dev_err(&pdev->dev, "Unable to obtain base resource\n");
+               return -EINVAL;
+       }
+
+       bgmac->plat.base = devm_ioremap_resource(&pdev->dev, regs);
+       if (IS_ERR(bgmac->plat.base))
+               return PTR_ERR(bgmac->plat.base);
+
+       regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "idm_base");
+       if (!regs) {
+               dev_err(&pdev->dev, "Unable to obtain idm resource\n");
+               return -EINVAL;
+       }
+
+       bgmac->plat.idm_base = devm_ioremap_resource(&pdev->dev, regs);
+       if (IS_ERR(bgmac->plat.idm_base))
+               return PTR_ERR(bgmac->plat.idm_base);
+
+       bgmac->read = platform_bgmac_read;
+       bgmac->write = platform_bgmac_write;
+       bgmac->idm_read = platform_bgmac_idm_read;
+       bgmac->idm_write = platform_bgmac_idm_write;
+       bgmac->clk_enabled = platform_bgmac_clk_enabled;
+       bgmac->clk_enable = platform_bgmac_clk_enable;
+       bgmac->cco_ctl_maskset = platform_bgmac_cco_ctl_maskset;
+       bgmac->get_bus_clock = platform_bgmac_get_bus_clock;
+       bgmac->cmn_maskset32 = platform_bgmac_cmn_maskset32;
+
+       return bgmac_enet_probe(bgmac);
+}
+
+static int bgmac_remove(struct platform_device *pdev)
+{
+       struct bgmac *bgmac = platform_get_drvdata(pdev);
+
+       bgmac_enet_remove(bgmac);
+
+       return 0;
+}
+
+static const struct of_device_id bgmac_of_enet_match[] = {
+       {.compatible = "brcm,amac",},
+       {.compatible = "brcm,nsp-amac",},
+       {},
+};
+
+MODULE_DEVICE_TABLE(of, bgmac_of_enet_match);
+
+static struct platform_driver bgmac_enet_driver = {
+       .driver = {
+               .name  = "bgmac-enet",
+               .of_match_table = bgmac_of_enet_match,
+       },
+       .probe = bgmac_probe,
+       .remove = bgmac_remove,
+};
+
+module_platform_driver(bgmac_enet_driver);
+MODULE_LICENSE("GPL");
index b045dc0..c4751ec 100644
@@ -6,51 +6,27 @@
  * Licensed under the GNU/GPL. See COPYING for details.
  */
 
-#include "bgmac.h"
 
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/delay.h>
+#define pr_fmt(fmt)            KBUILD_MODNAME ": " fmt
+
+#include <linux/bcma/bcma.h>
 #include <linux/etherdevice.h>
-#include <linux/mii.h>
-#include <linux/phy.h>
-#include <linux/phy_fixed.h>
-#include <linux/interrupt.h>
-#include <linux/dma-mapping.h>
 #include <linux/bcm47xx_nvram.h>
+#include "bgmac.h"
 
-static const struct bcma_device_id bgmac_bcma_tbl[] = {
-       BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_4706_MAC_GBIT, BCMA_ANY_REV, BCMA_ANY_CLASS),
-       BCMA_CORE(BCMA_MANUF_BCM, BCMA_CORE_MAC_GBIT, BCMA_ANY_REV, BCMA_ANY_CLASS),
-       {},
-};
-MODULE_DEVICE_TABLE(bcma, bgmac_bcma_tbl);
-
-static inline bool bgmac_is_bcm4707_family(struct bgmac *bgmac)
-{
-       switch (bgmac->core->bus->chipinfo.id) {
-       case BCMA_CHIP_ID_BCM4707:
-       case BCMA_CHIP_ID_BCM47094:
-       case BCMA_CHIP_ID_BCM53018:
-               return true;
-       default:
-               return false;
-       }
-}
-
-static bool bgmac_wait_value(struct bcma_device *core, u16 reg, u32 mask,
+static bool bgmac_wait_value(struct bgmac *bgmac, u16 reg, u32 mask,
                             u32 value, int timeout)
 {
        u32 val;
        int i;
 
        for (i = 0; i < timeout / 10; i++) {
-               val = bcma_read32(core, reg);
+               val = bgmac_read(bgmac, reg);
                if ((val & mask) == value)
                        return true;
                udelay(10);
        }
-       pr_err("Timeout waiting for reg 0x%X\n", reg);
+       dev_err(bgmac->dev, "Timeout waiting for reg 0x%X\n", reg);
        return false;
 }
 
@@ -84,22 +60,22 @@ static void bgmac_dma_tx_reset(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
                udelay(10);
        }
        if (i)
-               bgmac_err(bgmac, "Timeout suspending DMA TX ring 0x%X (BGMAC_DMA_TX_STAT: 0x%08X)\n",
-                         ring->mmio_base, val);
+               dev_err(bgmac->dev, "Timeout suspending DMA TX ring 0x%X (BGMAC_DMA_TX_STAT: 0x%08X)\n",
+                       ring->mmio_base, val);
 
        /* Remove SUSPEND bit */
        bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_CTL, 0);
-       if (!bgmac_wait_value(bgmac->core,
+       if (!bgmac_wait_value(bgmac,
                              ring->mmio_base + BGMAC_DMA_TX_STATUS,
                              BGMAC_DMA_TX_STAT, BGMAC_DMA_TX_STAT_DISABLED,
                              10000)) {
-               bgmac_warn(bgmac, "DMA TX ring 0x%X wasn't disabled on time, waiting additional 300us\n",
-                          ring->mmio_base);
+               dev_warn(bgmac->dev, "DMA TX ring 0x%X wasn't disabled on time, waiting additional 300us\n",
+                        ring->mmio_base);
                udelay(300);
                val = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS);
                if ((val & BGMAC_DMA_TX_STAT) != BGMAC_DMA_TX_STAT_DISABLED)
-                       bgmac_err(bgmac, "Reset of DMA TX ring 0x%X failed\n",
-                                 ring->mmio_base);
+                       dev_err(bgmac->dev, "Reset of DMA TX ring 0x%X failed\n",
+                               ring->mmio_base);
        }
 }
 
@@ -109,7 +85,7 @@ static void bgmac_dma_tx_enable(struct bgmac *bgmac,
        u32 ctl;
 
        ctl = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_CTL);
-       if (bgmac->core->id.rev >= 4) {
+       if (bgmac->feature_flags & BGMAC_FEAT_TX_MASK_SETUP) {
                ctl &= ~BGMAC_DMA_TX_BL_MASK;
                ctl |= BGMAC_DMA_TX_BL_128 << BGMAC_DMA_TX_BL_SHIFT;
 
@@ -152,7 +128,7 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
                                    struct bgmac_dma_ring *ring,
                                    struct sk_buff *skb)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        struct net_device *net_dev = bgmac->net_dev;
        int index = ring->end % BGMAC_TX_RING_SLOTS;
        struct bgmac_slot_info *slot = &ring->slots[index];
@@ -161,7 +137,7 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
        int i;
 
        if (skb->len > BGMAC_DESC_CTL1_LEN) {
-               bgmac_err(bgmac, "Too long skb (%d)\n", skb->len);
+               netdev_err(bgmac->net_dev, "Too long skb (%d)\n", skb->len);
                goto err_drop;
        }
 
@@ -174,7 +150,7 @@ static netdev_tx_t bgmac_dma_tx_add(struct bgmac *bgmac,
         * even when ring->end overflows
         */
        if (ring->end - ring->start + nr_frags + 1 >= BGMAC_TX_RING_SLOTS) {
-               bgmac_err(bgmac, "TX ring is full, queue should be stopped!\n");
+               netdev_err(bgmac->net_dev, "TX ring is full, queue should be stopped!\n");
                netif_stop_queue(net_dev);
                return NETDEV_TX_BUSY;
        }
@@ -231,7 +207,7 @@ err_dma:
        dma_unmap_single(dma_dev, slot->dma_addr, skb_headlen(skb),
                         DMA_TO_DEVICE);
 
-       while (i > 0) {
+       while (i-- > 0) {
                int index = (ring->end + i) % BGMAC_TX_RING_SLOTS;
                struct bgmac_slot_info *slot = &ring->slots[index];
                u32 ctl1 = le32_to_cpu(ring->cpu_base[index].ctl1);
@@ -241,8 +217,8 @@ err_dma:
        }
 
 err_dma_head:
-       bgmac_err(bgmac, "Mapping error of skb on ring 0x%X\n",
-                 ring->mmio_base);
+       netdev_err(bgmac->net_dev, "Mapping error of skb on ring 0x%X\n",
+                  ring->mmio_base);
 
 err_drop:
        dev_kfree_skb(skb);
@@ -254,7 +230,7 @@ err_drop:
 /* Free transmitted packets */
 static void bgmac_dma_tx_free(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        int empty_slot;
        bool freed = false;
        unsigned bytes_compl = 0, pkts_compl = 0;
@@ -317,12 +293,12 @@ static void bgmac_dma_rx_reset(struct bgmac *bgmac, struct bgmac_dma_ring *ring)
                return;
 
        bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_CTL, 0);
-       if (!bgmac_wait_value(bgmac->core,
+       if (!bgmac_wait_value(bgmac,
                              ring->mmio_base + BGMAC_DMA_RX_STATUS,
                              BGMAC_DMA_RX_STAT, BGMAC_DMA_RX_STAT_DISABLED,
                              10000))
-               bgmac_err(bgmac, "Reset of ring 0x%X RX failed\n",
-                         ring->mmio_base);
+               dev_err(bgmac->dev, "Reset of ring 0x%X RX failed\n",
+                       ring->mmio_base);
 }
 
 static void bgmac_dma_rx_enable(struct bgmac *bgmac,
@@ -331,7 +307,7 @@ static void bgmac_dma_rx_enable(struct bgmac *bgmac,
        u32 ctl;
 
        ctl = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_CTL);
-       if (bgmac->core->id.rev >= 4) {
+       if (bgmac->feature_flags & BGMAC_FEAT_RX_MASK_SETUP) {
                ctl &= ~BGMAC_DMA_RX_BL_MASK;
                ctl |= BGMAC_DMA_RX_BL_128 << BGMAC_DMA_RX_BL_SHIFT;
 
@@ -352,7 +328,7 @@ static void bgmac_dma_rx_enable(struct bgmac *bgmac,
 static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
                                     struct bgmac_slot_info *slot)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        dma_addr_t dma_addr;
        struct bgmac_rx_header *rx;
        void *buf;
@@ -371,7 +347,7 @@ static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
        dma_addr = dma_map_single(dma_dev, buf + BGMAC_RX_BUF_OFFSET,
                                  BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dma_dev, dma_addr)) {
-               bgmac_err(bgmac, "DMA mapping error\n");
+               netdev_err(bgmac->net_dev, "DMA mapping error\n");
                put_page(virt_to_head_page(buf));
                return -ENOMEM;
        }
@@ -441,7 +417,7 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
        end_slot /= sizeof(struct bgmac_dma_desc);
 
        while (ring->start != end_slot) {
-               struct device *dma_dev = bgmac->core->dma_dev;
+               struct device *dma_dev = bgmac->dma_dev;
                struct bgmac_slot_info *slot = &ring->slots[ring->start];
                struct bgmac_rx_header *rx = slot->buf + BGMAC_RX_BUF_OFFSET;
                struct sk_buff *skb;
@@ -466,16 +442,16 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 
                        /* Check for poison and drop or pass the packet */
                        if (len == 0xdead && flags == 0xbeef) {
-                               bgmac_err(bgmac, "Found poisoned packet at slot %d, DMA issue!\n",
-                                         ring->start);
+                               netdev_err(bgmac->net_dev, "Found poisoned packet at slot %d, DMA issue!\n",
+                                          ring->start);
                                put_page(virt_to_head_page(buf));
                                bgmac->net_dev->stats.rx_errors++;
                                break;
                        }
 
                        if (len > BGMAC_RX_ALLOC_SIZE) {
-                               bgmac_err(bgmac, "Found oversized packet at slot %d, DMA issue!\n",
-                                         ring->start);
+                               netdev_err(bgmac->net_dev, "Found oversized packet at slot %d, DMA issue!\n",
+                                          ring->start);
                                put_page(virt_to_head_page(buf));
                                bgmac->net_dev->stats.rx_length_errors++;
                                bgmac->net_dev->stats.rx_errors++;
@@ -487,7 +463,7 @@ static int bgmac_dma_rx_read(struct bgmac *bgmac, struct bgmac_dma_ring *ring,
 
                        skb = build_skb(buf, BGMAC_RX_ALLOC_SIZE);
                        if (unlikely(!skb)) {
-                               bgmac_err(bgmac, "build_skb failed\n");
+                               netdev_err(bgmac->net_dev, "build_skb failed\n");
                                put_page(virt_to_head_page(buf));
                                bgmac->net_dev->stats.rx_errors++;
                                break;
@@ -544,7 +520,7 @@ static bool bgmac_dma_unaligned(struct bgmac *bgmac,
 static void bgmac_dma_tx_ring_free(struct bgmac *bgmac,
                                   struct bgmac_dma_ring *ring)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        struct bgmac_dma_desc *dma_desc = ring->cpu_base;
        struct bgmac_slot_info *slot;
        int i;
@@ -570,7 +546,7 @@ static void bgmac_dma_tx_ring_free(struct bgmac *bgmac,
 static void bgmac_dma_rx_ring_free(struct bgmac *bgmac,
                                   struct bgmac_dma_ring *ring)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        struct bgmac_slot_info *slot;
        int i;
 
@@ -591,7 +567,7 @@ static void bgmac_dma_ring_desc_free(struct bgmac *bgmac,
                                     struct bgmac_dma_ring *ring,
                                     int num_slots)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        int size;
 
        if (!ring->cpu_base)
@@ -629,7 +605,7 @@ static void bgmac_dma_free(struct bgmac *bgmac)
 
 static int bgmac_dma_alloc(struct bgmac *bgmac)
 {
-       struct device *dma_dev = bgmac->core->dma_dev;
+       struct device *dma_dev = bgmac->dma_dev;
        struct bgmac_dma_ring *ring;
        static const u16 ring_base[] = { BGMAC_DMA_BASE0, BGMAC_DMA_BASE1,
                                         BGMAC_DMA_BASE2, BGMAC_DMA_BASE3, };
@@ -640,8 +616,8 @@ static int bgmac_dma_alloc(struct bgmac *bgmac)
        BUILD_BUG_ON(BGMAC_MAX_TX_RINGS > ARRAY_SIZE(ring_base));
        BUILD_BUG_ON(BGMAC_MAX_RX_RINGS > ARRAY_SIZE(ring_base));
 
-       if (!(bcma_aread32(bgmac->core, BCMA_IOST) & BCMA_IOST_DMA64)) {
-               bgmac_err(bgmac, "Core does not report 64-bit DMA\n");
+       if (!(bgmac_idm_read(bgmac, BCMA_IOST) & BCMA_IOST_DMA64)) {
+               dev_err(bgmac->dev, "Core does not report 64-bit DMA\n");
                return -ENOTSUPP;
        }
 
@@ -655,8 +631,8 @@ static int bgmac_dma_alloc(struct bgmac *bgmac)
                                                     &ring->dma_base,
                                                     GFP_KERNEL);
                if (!ring->cpu_base) {
-                       bgmac_err(bgmac, "Allocation of TX ring 0x%X failed\n",
-                                 ring->mmio_base);
+                       dev_err(bgmac->dev, "Allocation of TX ring 0x%X failed\n",
+                               ring->mmio_base);
                        goto err_dma_free;
                }
 
@@ -680,8 +656,8 @@ static int bgmac_dma_alloc(struct bgmac *bgmac)
                                                     &ring->dma_base,
                                                     GFP_KERNEL);
                if (!ring->cpu_base) {
-                       bgmac_err(bgmac, "Allocation of RX ring 0x%X failed\n",
-                                 ring->mmio_base);
+                       dev_err(bgmac->dev, "Allocation of RX ring 0x%X failed\n",
+                               ring->mmio_base);
                        err = -ENOMEM;
                        goto err_dma_free;
                }
@@ -756,150 +732,6 @@ error:
        return err;
 }
 
-/**************************************************
- * PHY ops
- **************************************************/
-
-static u16 bgmac_phy_read(struct bgmac *bgmac, u8 phyaddr, u8 reg)
-{
-       struct bcma_device *core;
-       u16 phy_access_addr;
-       u16 phy_ctl_addr;
-       u32 tmp;
-
-       BUILD_BUG_ON(BGMAC_PA_DATA_MASK != BCMA_GMAC_CMN_PA_DATA_MASK);
-       BUILD_BUG_ON(BGMAC_PA_ADDR_MASK != BCMA_GMAC_CMN_PA_ADDR_MASK);
-       BUILD_BUG_ON(BGMAC_PA_ADDR_SHIFT != BCMA_GMAC_CMN_PA_ADDR_SHIFT);
-       BUILD_BUG_ON(BGMAC_PA_REG_MASK != BCMA_GMAC_CMN_PA_REG_MASK);
-       BUILD_BUG_ON(BGMAC_PA_REG_SHIFT != BCMA_GMAC_CMN_PA_REG_SHIFT);
-       BUILD_BUG_ON(BGMAC_PA_WRITE != BCMA_GMAC_CMN_PA_WRITE);
-       BUILD_BUG_ON(BGMAC_PA_START != BCMA_GMAC_CMN_PA_START);
-       BUILD_BUG_ON(BGMAC_PC_EPA_MASK != BCMA_GMAC_CMN_PC_EPA_MASK);
-       BUILD_BUG_ON(BGMAC_PC_MCT_MASK != BCMA_GMAC_CMN_PC_MCT_MASK);
-       BUILD_BUG_ON(BGMAC_PC_MCT_SHIFT != BCMA_GMAC_CMN_PC_MCT_SHIFT);
-       BUILD_BUG_ON(BGMAC_PC_MTE != BCMA_GMAC_CMN_PC_MTE);
-
-       if (bgmac->core->id.id == BCMA_CORE_4706_MAC_GBIT) {
-               core = bgmac->core->bus->drv_gmac_cmn.core;
-               phy_access_addr = BCMA_GMAC_CMN_PHY_ACCESS;
-               phy_ctl_addr = BCMA_GMAC_CMN_PHY_CTL;
-       } else {
-               core = bgmac->core;
-               phy_access_addr = BGMAC_PHY_ACCESS;
-               phy_ctl_addr = BGMAC_PHY_CNTL;
-       }
-
-       tmp = bcma_read32(core, phy_ctl_addr);
-       tmp &= ~BGMAC_PC_EPA_MASK;
-       tmp |= phyaddr;
-       bcma_write32(core, phy_ctl_addr, tmp);
-
-       tmp = BGMAC_PA_START;
-       tmp |= phyaddr << BGMAC_PA_ADDR_SHIFT;
-       tmp |= reg << BGMAC_PA_REG_SHIFT;
-       bcma_write32(core, phy_access_addr, tmp);
-
-       if (!bgmac_wait_value(core, phy_access_addr, BGMAC_PA_START, 0, 1000)) {
-               bgmac_err(bgmac, "Reading PHY %d register 0x%X failed\n",
-                         phyaddr, reg);
-               return 0xffff;
-       }
-
-       return bcma_read32(core, phy_access_addr) & BGMAC_PA_DATA_MASK;
-}
-
-/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphywr */
-static int bgmac_phy_write(struct bgmac *bgmac, u8 phyaddr, u8 reg, u16 value)
-{
-       struct bcma_device *core;
-       u16 phy_access_addr;
-       u16 phy_ctl_addr;
-       u32 tmp;
-
-       if (bgmac->core->id.id == BCMA_CORE_4706_MAC_GBIT) {
-               core = bgmac->core->bus->drv_gmac_cmn.core;
-               phy_access_addr = BCMA_GMAC_CMN_PHY_ACCESS;
-               phy_ctl_addr = BCMA_GMAC_CMN_PHY_CTL;
-       } else {
-               core = bgmac->core;
-               phy_access_addr = BGMAC_PHY_ACCESS;
-               phy_ctl_addr = BGMAC_PHY_CNTL;
-       }
-
-       tmp = bcma_read32(core, phy_ctl_addr);
-       tmp &= ~BGMAC_PC_EPA_MASK;
-       tmp |= phyaddr;
-       bcma_write32(core, phy_ctl_addr, tmp);
-
-       bgmac_write(bgmac, BGMAC_INT_STATUS, BGMAC_IS_MDIO);
-       if (bgmac_read(bgmac, BGMAC_INT_STATUS) & BGMAC_IS_MDIO)
-               bgmac_warn(bgmac, "Error setting MDIO int\n");
-
-       tmp = BGMAC_PA_START;
-       tmp |= BGMAC_PA_WRITE;
-       tmp |= phyaddr << BGMAC_PA_ADDR_SHIFT;
-       tmp |= reg << BGMAC_PA_REG_SHIFT;
-       tmp |= value;
-       bcma_write32(core, phy_access_addr, tmp);
-
-       if (!bgmac_wait_value(core, phy_access_addr, BGMAC_PA_START, 0, 1000)) {
-               bgmac_err(bgmac, "Writing to PHY %d register 0x%X failed\n",
-                         phyaddr, reg);
-               return -ETIMEDOUT;
-       }
-
-       return 0;
-}
-
-/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphyinit */
-static void bgmac_phy_init(struct bgmac *bgmac)
-{
-       struct bcma_chipinfo *ci = &bgmac->core->bus->chipinfo;
-       struct bcma_drv_cc *cc = &bgmac->core->bus->drv_cc;
-       u8 i;
-
-       if (ci->id == BCMA_CHIP_ID_BCM5356) {
-               for (i = 0; i < 5; i++) {
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x008b);
-                       bgmac_phy_write(bgmac, i, 0x15, 0x0100);
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000f);
-                       bgmac_phy_write(bgmac, i, 0x12, 0x2aaa);
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000b);
-               }
-       }
-       if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg != 10) ||
-           (ci->id == BCMA_CHIP_ID_BCM4749 && ci->pkg != 10) ||
-           (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg != 9)) {
-               bcma_chipco_chipctl_maskset(cc, 2, ~0xc0000000, 0);
-               bcma_chipco_chipctl_maskset(cc, 4, ~0x80000000, 0);
-               for (i = 0; i < 5; i++) {
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000f);
-                       bgmac_phy_write(bgmac, i, 0x16, 0x5284);
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000b);
-                       bgmac_phy_write(bgmac, i, 0x17, 0x0010);
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000f);
-                       bgmac_phy_write(bgmac, i, 0x16, 0x5296);
-                       bgmac_phy_write(bgmac, i, 0x17, 0x1073);
-                       bgmac_phy_write(bgmac, i, 0x17, 0x9073);
-                       bgmac_phy_write(bgmac, i, 0x16, 0x52b6);
-                       bgmac_phy_write(bgmac, i, 0x17, 0x9273);
-                       bgmac_phy_write(bgmac, i, 0x1f, 0x000b);
-               }
-       }
-}
-
-/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipphyreset */
-static void bgmac_phy_reset(struct bgmac *bgmac)
-{
-       if (bgmac->phyaddr == BGMAC_PHY_NOREGS)
-               return;
-
-       bgmac_phy_write(bgmac, bgmac->phyaddr, MII_BMCR, BMCR_RESET);
-       udelay(100);
-       if (bgmac_phy_read(bgmac, bgmac->phyaddr, MII_BMCR) & BMCR_RESET)
-               bgmac_err(bgmac, "PHY reset failed\n");
-       bgmac_phy_init(bgmac);
-}
 
 /**************************************************
  * Chip ops
@@ -913,14 +745,20 @@ static void bgmac_cmdcfg_maskset(struct bgmac *bgmac, u32 mask, u32 set,
 {
        u32 cmdcfg = bgmac_read(bgmac, BGMAC_CMDCFG);
        u32 new_val = (cmdcfg & mask) | set;
+       u32 cmdcfg_sr;
+
+       if (bgmac->feature_flags & BGMAC_FEAT_CMDCFG_SR_REV4)
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV4;
+       else
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV0;
 
-       bgmac_set(bgmac, BGMAC_CMDCFG, BGMAC_CMDCFG_SR(bgmac->core->id.rev));
+       bgmac_set(bgmac, BGMAC_CMDCFG, cmdcfg_sr);
        udelay(2);
 
        if (new_val != cmdcfg || force)
                bgmac_write(bgmac, BGMAC_CMDCFG, new_val);
 
-       bgmac_mask(bgmac, BGMAC_CMDCFG, ~BGMAC_CMDCFG_SR(bgmac->core->id.rev));
+       bgmac_mask(bgmac, BGMAC_CMDCFG, ~cmdcfg_sr);
        udelay(2);
 }
 
@@ -949,7 +787,7 @@ static void bgmac_chip_stats_update(struct bgmac *bgmac)
 {
        int i;
 
-       if (bgmac->core->id.id != BCMA_CORE_4706_MAC_GBIT) {
+       if (!(bgmac->feature_flags & BGMAC_FEAT_NO_CLR_MIB)) {
                for (i = 0; i < BGMAC_NUM_MIB_TX_REGS; i++)
                        bgmac->mib_tx_regs[i] =
                                bgmac_read(bgmac,
@@ -968,7 +806,7 @@ static void bgmac_clear_mib(struct bgmac *bgmac)
 {
        int i;
 
-       if (bgmac->core->id.id == BCMA_CORE_4706_MAC_GBIT)
+       if (bgmac->feature_flags & BGMAC_FEAT_NO_CLR_MIB)
                return;
 
        bgmac_set(bgmac, BGMAC_DEV_CTL, BGMAC_DC_MROR);
@@ -998,7 +836,8 @@ static void bgmac_mac_speed(struct bgmac *bgmac)
                set |= BGMAC_CMDCFG_ES_2500;
                break;
        default:
-               bgmac_err(bgmac, "Unsupported speed: %d\n", bgmac->mac_speed);
+               dev_err(bgmac->dev, "Unsupported speed: %d\n",
+                       bgmac->mac_speed);
        }
 
        if (bgmac->mac_duplex == DUPLEX_HALF)
@@ -1009,17 +848,16 @@ static void bgmac_mac_speed(struct bgmac *bgmac)
 
 static void bgmac_miiconfig(struct bgmac *bgmac)
 {
-       struct bcma_device *core = bgmac->core;
-       u8 imode;
-
-       if (bgmac_is_bcm4707_family(bgmac)) {
-               bcma_awrite32(core, BCMA_IOCTL,
-                             bcma_aread32(core, BCMA_IOCTL) | 0x40 |
-                             BGMAC_BCMA_IOCTL_SW_CLKEN);
+       if (bgmac->feature_flags & BGMAC_FEAT_FORCE_SPEED_2500) {
+               bgmac_idm_write(bgmac, BCMA_IOCTL,
+                               bgmac_idm_read(bgmac, BCMA_IOCTL) | 0x40 |
+                               BGMAC_BCMA_IOCTL_SW_CLKEN);
                bgmac->mac_speed = SPEED_2500;
                bgmac->mac_duplex = DUPLEX_FULL;
                bgmac_mac_speed(bgmac);
        } else {
+               u8 imode;
+
                imode = (bgmac_read(bgmac, BGMAC_DEV_STATUS) &
                        BGMAC_DS_MM_MASK) >> BGMAC_DS_MM_SHIFT;
                if (imode == 0 || imode == 1) {
@@ -1033,14 +871,11 @@ static void bgmac_miiconfig(struct bgmac *bgmac)
 /* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipreset */
 static void bgmac_chip_reset(struct bgmac *bgmac)
 {
-       struct bcma_device *core = bgmac->core;
-       struct bcma_bus *bus = core->bus;
-       struct bcma_chipinfo *ci = &bus->chipinfo;
-       u32 flags;
+       u32 cmdcfg_sr;
        u32 iost;
        int i;
 
-       if (bcma_core_is_enabled(core)) {
+       if (bgmac_clk_enabled(bgmac)) {
                if (!bgmac->stats_grabbed) {
                        /* bgmac_chip_stats_update(bgmac); */
                        bgmac->stats_grabbed = true;
@@ -1058,38 +893,32 @@ static void bgmac_chip_reset(struct bgmac *bgmac)
                /* TODO: Clear software multicast filter list */
        }
 
-       iost = bcma_aread32(core, BCMA_IOST);
-       if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
-           (ci->id == BCMA_CHIP_ID_BCM4749 && ci->pkg == 10) ||
-           (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188))
+       iost = bgmac_idm_read(bgmac, BCMA_IOST);
+       if (bgmac->feature_flags & BGMAC_FEAT_IOST_ATTACHED)
                iost &= ~BGMAC_BCMA_IOST_ATTACHED;
 
        /* 3GMAC: for BCM4707 & BCM47094, only do core reset at bgmac_probe() */
-       if (ci->id != BCMA_CHIP_ID_BCM4707 &&
-           ci->id != BCMA_CHIP_ID_BCM47094) {
-               flags = 0;
+       if (!(bgmac->feature_flags & BGMAC_FEAT_NO_RESET)) {
+               u32 flags = 0;
                if (iost & BGMAC_BCMA_IOST_ATTACHED) {
                        flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
                        if (!bgmac->has_robosw)
                                flags |= BGMAC_BCMA_IOCTL_SW_RESET;
                }
-               bcma_core_enable(core, flags);
+               bgmac_clk_enable(bgmac, flags);
        }
 
        /* Request Misc PLL for corerev > 2 */
-       if (core->id.rev > 2 && !bgmac_is_bcm4707_family(bgmac)) {
+       if (bgmac->feature_flags & BGMAC_FEAT_MISC_PLL_REQ) {
                bgmac_set(bgmac, BCMA_CLKCTLST,
                          BGMAC_BCMA_CLKCTLST_MISC_PLL_REQ);
-               bgmac_wait_value(bgmac->core, BCMA_CLKCTLST,
+               bgmac_wait_value(bgmac, BCMA_CLKCTLST,
                                 BGMAC_BCMA_CLKCTLST_MISC_PLL_ST,
                                 BGMAC_BCMA_CLKCTLST_MISC_PLL_ST,
                                 1000);
        }
 
-       if (ci->id == BCMA_CHIP_ID_BCM5357 ||
-           ci->id == BCMA_CHIP_ID_BCM4749 ||
-           ci->id == BCMA_CHIP_ID_BCM53572) {
-               struct bcma_drv_cc *cc = &bgmac->core->bus->drv_cc;
+       if (bgmac->feature_flags & BGMAC_FEAT_SW_TYPE_PHY) {
                u8 et_swtype = 0;
                u8 sw_type = BGMAC_CHIPCTL_1_SW_TYPE_EPHY |
                             BGMAC_CHIPCTL_1_IF_TYPE_MII;
@@ -1097,35 +926,37 @@ static void bgmac_chip_reset(struct bgmac *bgmac)
 
                if (bcm47xx_nvram_getenv("et_swtype", buf, sizeof(buf)) > 0) {
                        if (kstrtou8(buf, 0, &et_swtype))
-                               bgmac_err(bgmac, "Failed to parse et_swtype (%s)\n",
-                                         buf);
+                               dev_err(bgmac->dev, "Failed to parse et_swtype (%s)\n",
+                                       buf);
                        et_swtype &= 0x0f;
                        et_swtype <<= 4;
                        sw_type = et_swtype;
-               } else if (ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM5358) {
+               } else if (bgmac->feature_flags & BGMAC_FEAT_SW_TYPE_EPHYRMII) {
                        sw_type = BGMAC_CHIPCTL_1_SW_TYPE_EPHYRMII;
-               } else if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
-                          (ci->id == BCMA_CHIP_ID_BCM4749 && ci->pkg == 10) ||
-                          (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188)) {
+               } else if (bgmac->feature_flags & BGMAC_FEAT_SW_TYPE_RGMII) {
                        sw_type = BGMAC_CHIPCTL_1_IF_TYPE_RGMII |
                                  BGMAC_CHIPCTL_1_SW_TYPE_RGMII;
                }
-               bcma_chipco_chipctl_maskset(cc, 1,
-                                           ~(BGMAC_CHIPCTL_1_IF_TYPE_MASK |
-                                             BGMAC_CHIPCTL_1_SW_TYPE_MASK),
-                                           sw_type);
+               bgmac_cco_ctl_maskset(bgmac, 1, ~(BGMAC_CHIPCTL_1_IF_TYPE_MASK |
+                                                 BGMAC_CHIPCTL_1_SW_TYPE_MASK),
+                                     sw_type);
        }
 
        if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
-               bcma_awrite32(core, BCMA_IOCTL,
-                             bcma_aread32(core, BCMA_IOCTL) &
-                             ~BGMAC_BCMA_IOCTL_SW_RESET);
+               bgmac_idm_write(bgmac, BCMA_IOCTL,
+                               bgmac_idm_read(bgmac, BCMA_IOCTL) &
+                               ~BGMAC_BCMA_IOCTL_SW_RESET);
 
        /* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/gmac_reset
         * Specs don't say about using BGMAC_CMDCFG_SR, but in this routine
         * BGMAC_CMDCFG is read _after_ putting chip in a reset. So it has to
         * be keps until taking MAC out of the reset.
         */
+       if (bgmac->feature_flags & BGMAC_FEAT_CMDCFG_SR_REV4)
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV4;
+       else
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV0;
+
        bgmac_cmdcfg_maskset(bgmac,
                             ~(BGMAC_CMDCFG_TE |
                               BGMAC_CMDCFG_RE |
@@ -1143,19 +974,20 @@ static void bgmac_chip_reset(struct bgmac *bgmac)
                             BGMAC_CMDCFG_PROM |
                             BGMAC_CMDCFG_NLC |
                             BGMAC_CMDCFG_CFE |
-                            BGMAC_CMDCFG_SR(core->id.rev),
+                            cmdcfg_sr,
                             false);
        bgmac->mac_speed = SPEED_UNKNOWN;
        bgmac->mac_duplex = DUPLEX_UNKNOWN;
 
        bgmac_clear_mib(bgmac);
-       if (core->id.id == BCMA_CORE_4706_MAC_GBIT)
-               bcma_maskset32(bgmac->cmn, BCMA_GMAC_CMN_PHY_CTL, ~0,
-                              BCMA_GMAC_CMN_PC_MTE);
+       if (bgmac->feature_flags & BGMAC_FEAT_CMN_PHY_CTL)
+               bgmac_cmn_maskset32(bgmac, BCMA_GMAC_CMN_PHY_CTL, ~0,
+                                   BCMA_GMAC_CMN_PC_MTE);
        else
                bgmac_set(bgmac, BGMAC_PHY_CNTL, BGMAC_PC_MTE);
        bgmac_miiconfig(bgmac);
-       bgmac_phy_init(bgmac);
+       if (bgmac->mii_bus)
+               bgmac->mii_bus->reset(bgmac->mii_bus);
 
        netdev_reset_queue(bgmac->net_dev);
 }
@@ -1174,50 +1006,51 @@ static void bgmac_chip_intrs_off(struct bgmac *bgmac)
 /* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/gmac_enable */
 static void bgmac_enable(struct bgmac *bgmac)
 {
-       struct bcma_chipinfo *ci = &bgmac->core->bus->chipinfo;
+       u32 cmdcfg_sr;
        u32 cmdcfg;
        u32 mode;
-       u32 rxq_ctl;
-       u32 fl_ctl;
-       u16 bp_clk;
-       u8 mdp;
+
+       if (bgmac->feature_flags & BGMAC_FEAT_CMDCFG_SR_REV4)
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV4;
+       else
+               cmdcfg_sr = BGMAC_CMDCFG_SR_REV0;
 
        cmdcfg = bgmac_read(bgmac, BGMAC_CMDCFG);
        bgmac_cmdcfg_maskset(bgmac, ~(BGMAC_CMDCFG_TE | BGMAC_CMDCFG_RE),
-                            BGMAC_CMDCFG_SR(bgmac->core->id.rev), true);
+                            cmdcfg_sr, true);
        udelay(2);
        cmdcfg |= BGMAC_CMDCFG_TE | BGMAC_CMDCFG_RE;
        bgmac_write(bgmac, BGMAC_CMDCFG, cmdcfg);
 
        mode = (bgmac_read(bgmac, BGMAC_DEV_STATUS) & BGMAC_DS_MM_MASK) >>
                BGMAC_DS_MM_SHIFT;
-       if (ci->id != BCMA_CHIP_ID_BCM47162 || mode != 0)
+       if (bgmac->feature_flags & BGMAC_FEAT_CLKCTLST || mode != 0)
                bgmac_set(bgmac, BCMA_CLKCTLST, BCMA_CLKCTLST_FORCEHT);
-       if (ci->id == BCMA_CHIP_ID_BCM47162 && mode == 2)
-               bcma_chipco_chipctl_maskset(&bgmac->core->bus->drv_cc, 1, ~0,
-                                           BGMAC_CHIPCTL_1_RXC_DLL_BYPASS);
-
-       switch (ci->id) {
-       case BCMA_CHIP_ID_BCM5357:
-       case BCMA_CHIP_ID_BCM4749:
-       case BCMA_CHIP_ID_BCM53572:
-       case BCMA_CHIP_ID_BCM4716:
-       case BCMA_CHIP_ID_BCM47162:
-               fl_ctl = 0x03cb04cb;
-               if (ci->id == BCMA_CHIP_ID_BCM5357 ||
-                   ci->id == BCMA_CHIP_ID_BCM4749 ||
-                   ci->id == BCMA_CHIP_ID_BCM53572)
+       if (bgmac->feature_flags & BGMAC_FEAT_CLKCTLST && mode == 2)
+               bgmac_cco_ctl_maskset(bgmac, 1, ~0,
+                                     BGMAC_CHIPCTL_1_RXC_DLL_BYPASS);
+
+       if (bgmac->feature_flags & (BGMAC_FEAT_FLW_CTRL1 |
+                                   BGMAC_FEAT_FLW_CTRL2)) {
+               u32 fl_ctl;
+
+               if (bgmac->feature_flags & BGMAC_FEAT_FLW_CTRL1)
                        fl_ctl = 0x2300e1;
+               else
+                       fl_ctl = 0x03cb04cb;
+
                bgmac_write(bgmac, BGMAC_FLOW_CTL_THRESH, fl_ctl);
                bgmac_write(bgmac, BGMAC_PAUSE_CTL, 0x27fff);
-               break;
        }
 
-       if (!bgmac_is_bcm4707_family(bgmac)) {
+       if (bgmac->feature_flags & BGMAC_FEAT_SET_RXQ_CLK) {
+               u32 rxq_ctl;
+               u16 bp_clk;
+               u8 mdp;
+
                rxq_ctl = bgmac_read(bgmac, BGMAC_RXQ_CTL);
                rxq_ctl &= ~BGMAC_RXQ_CTL_MDP_MASK;
-               bp_clk = bcma_pmu_get_bus_clock(&bgmac->core->bus->drv_cc) /
-                               1000000;
+               bp_clk = bgmac_get_bus_clock(bgmac) / 1000000;
                mdp = (bp_clk * 128 / 1000) - 3;
                rxq_ctl |= (mdp << BGMAC_RXQ_CTL_MDP_SHIFT);
                bgmac_write(bgmac, BGMAC_RXQ_CTL, rxq_ctl);
@@ -1261,7 +1094,7 @@ static irqreturn_t bgmac_interrupt(int irq, void *dev_id)
 
        int_status &= ~(BGMAC_IS_TX0 | BGMAC_IS_RX);
        if (int_status)
-               bgmac_err(bgmac, "Unknown IRQs: 0x%08X\n", int_status);
+               dev_err(bgmac->dev, "Unknown IRQs: 0x%08X\n", int_status);
 
        /* Disable new interrupts until handling existing ones */
        bgmac_chip_intrs_off(bgmac);
@@ -1312,10 +1145,10 @@ static int bgmac_open(struct net_device *net_dev)
        /* Specs say about reclaiming rings here, but we do that in DMA init */
        bgmac_chip_init(bgmac);
 
-       err = request_irq(bgmac->core->irq, bgmac_interrupt, IRQF_SHARED,
+       err = request_irq(bgmac->irq, bgmac_interrupt, IRQF_SHARED,
                          KBUILD_MODNAME, net_dev);
        if (err < 0) {
-               bgmac_err(bgmac, "IRQ request error: %d!\n", err);
+               dev_err(bgmac->dev, "IRQ request error: %d!\n", err);
                bgmac_dma_cleanup(bgmac);
                return err;
        }
@@ -1338,7 +1171,7 @@ static int bgmac_stop(struct net_device *net_dev)
 
        napi_disable(&bgmac->napi);
        bgmac_chip_intrs_off(bgmac);
-       free_irq(bgmac->core->irq, net_dev);
+       free_irq(bgmac->irq, net_dev);
 
        bgmac_chip_reset(bgmac);
        bgmac_dma_cleanup(bgmac);
@@ -1517,7 +1350,7 @@ static void bgmac_get_drvinfo(struct net_device *net_dev,
                              struct ethtool_drvinfo *info)
 {
        strlcpy(info->driver, KBUILD_MODNAME, sizeof(info->driver));
-       strlcpy(info->bus_info, "BCMA", sizeof(info->bus_info));
+       strlcpy(info->bus_info, "AXI", sizeof(info->bus_info));
 }
 
 static const struct ethtool_ops bgmac_ethtool_ops = {
@@ -1533,17 +1366,6 @@ static const struct ethtool_ops bgmac_ethtool_ops = {
  * MII
  **************************************************/
 
-static int bgmac_mii_read(struct mii_bus *bus, int mii_id, int regnum)
-{
-       return bgmac_phy_read(bus->priv, mii_id, regnum);
-}
-
-static int bgmac_mii_write(struct mii_bus *bus, int mii_id, int regnum,
-                          u16 value)
-{
-       return bgmac_phy_write(bus->priv, mii_id, regnum, value);
-}
-
 static void bgmac_adjust_link(struct net_device *net_dev)
 {
        struct bgmac *bgmac = netdev_priv(net_dev);
@@ -1568,7 +1390,7 @@ static void bgmac_adjust_link(struct net_device *net_dev)
        }
 }
 
-static int bgmac_fixed_phy_register(struct bgmac *bgmac)
+static int bgmac_phy_connect_direct(struct bgmac *bgmac)
 {
        struct fixed_phy_status fphy_status = {
                .link = 1,
@@ -1580,194 +1402,76 @@ static int bgmac_fixed_phy_register(struct bgmac *bgmac)
 
        phy_dev = fixed_phy_register(PHY_POLL, &fphy_status, -1, NULL);
        if (!phy_dev || IS_ERR(phy_dev)) {
-               bgmac_err(bgmac, "Failed to register fixed PHY device\n");
+               dev_err(bgmac->dev, "Failed to register fixed PHY device\n");
                return -ENODEV;
        }
 
        err = phy_connect_direct(bgmac->net_dev, phy_dev, bgmac_adjust_link,
                                 PHY_INTERFACE_MODE_MII);
        if (err) {
-               bgmac_err(bgmac, "Connecting PHY failed\n");
+               dev_err(bgmac->dev, "Connecting PHY failed\n");
                return err;
        }
 
        return err;
 }
 
-static int bgmac_mii_register(struct bgmac *bgmac)
+static int bgmac_phy_connect(struct bgmac *bgmac)
 {
-       struct mii_bus *mii_bus;
        struct phy_device *phy_dev;
        char bus_id[MII_BUS_ID_SIZE + 3];
-       int err = 0;
-
-       if (bgmac_is_bcm4707_family(bgmac))
-               return bgmac_fixed_phy_register(bgmac);
-
-       mii_bus = mdiobus_alloc();
-       if (!mii_bus)
-               return -ENOMEM;
-
-       mii_bus->name = "bgmac mii bus";
-       sprintf(mii_bus->id, "%s-%d-%d", "bgmac", bgmac->core->bus->num,
-               bgmac->core->core_unit);
-       mii_bus->priv = bgmac;
-       mii_bus->read = bgmac_mii_read;
-       mii_bus->write = bgmac_mii_write;
-       mii_bus->parent = &bgmac->core->dev;
-       mii_bus->phy_mask = ~(1 << bgmac->phyaddr);
-
-       err = mdiobus_register(mii_bus);
-       if (err) {
-               bgmac_err(bgmac, "Registration of mii bus failed\n");
-               goto err_free_bus;
-       }
-
-       bgmac->mii_bus = mii_bus;
 
        /* Connect to the PHY */
-       snprintf(bus_id, sizeof(bus_id), PHY_ID_FMT, mii_bus->id,
+       snprintf(bus_id, sizeof(bus_id), PHY_ID_FMT, bgmac->mii_bus->id,
                 bgmac->phyaddr);
        phy_dev = phy_connect(bgmac->net_dev, bus_id, &bgmac_adjust_link,
                              PHY_INTERFACE_MODE_MII);
        if (IS_ERR(phy_dev)) {
-               bgmac_err(bgmac, "PHY connection failed\n");
-               err = PTR_ERR(phy_dev);
-               goto err_unregister_bus;
+               dev_err(bgmac->dev, "PHY connection failed\n");
+               return PTR_ERR(phy_dev);
        }
 
-       return err;
-
-err_unregister_bus:
-       mdiobus_unregister(mii_bus);
-err_free_bus:
-       mdiobus_free(mii_bus);
-       return err;
-}
-
-static void bgmac_mii_unregister(struct bgmac *bgmac)
-{
-       struct mii_bus *mii_bus = bgmac->mii_bus;
-
-       mdiobus_unregister(mii_bus);
-       mdiobus_free(mii_bus);
+       return 0;
 }
 
-/**************************************************
- * BCMA bus ops
- **************************************************/
-
-/* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipattach */
-static int bgmac_probe(struct bcma_device *core)
+int bgmac_enet_probe(struct bgmac *info)
 {
        struct net_device *net_dev;
        struct bgmac *bgmac;
-       struct ssb_sprom *sprom = &core->bus->sprom;
-       u8 *mac;
        int err;
 
-       switch (core->core_unit) {
-       case 0:
-               mac = sprom->et0mac;
-               break;
-       case 1:
-               mac = sprom->et1mac;
-               break;
-       case 2:
-               mac = sprom->et2mac;
-               break;
-       default:
-               pr_err("Unsupported core_unit %d\n", core->core_unit);
-               return -ENOTSUPP;
-       }
-
-       if (!is_valid_ether_addr(mac)) {
-               dev_err(&core->dev, "Invalid MAC addr: %pM\n", mac);
-               eth_random_addr(mac);
-               dev_warn(&core->dev, "Using random MAC: %pM\n", mac);
-       }
-
-       /* This (reset &) enable is not preset in specs or reference driver but
-        * Broadcom does it in arch PCI code when enabling fake PCI device.
-        */
-       bcma_core_enable(core, 0);
-
        /* Allocation and references */
        net_dev = alloc_etherdev(sizeof(*bgmac));
        if (!net_dev)
                return -ENOMEM;
+
        net_dev->netdev_ops = &bgmac_netdev_ops;
-       net_dev->irq = core->irq;
        net_dev->ethtool_ops = &bgmac_ethtool_ops;
        bgmac = netdev_priv(net_dev);
+       memcpy(bgmac, info, sizeof(*bgmac));
        bgmac->net_dev = net_dev;
-       bgmac->core = core;
-       bcma_set_drvdata(core, bgmac);
-       SET_NETDEV_DEV(net_dev, &core->dev);
-
-       /* Defaults */
-       memcpy(bgmac->net_dev->dev_addr, mac, ETH_ALEN);
-
-       /* On BCM4706 we need common core to access PHY */
-       if (core->id.id == BCMA_CORE_4706_MAC_GBIT &&
-           !core->bus->drv_gmac_cmn.core) {
-               bgmac_err(bgmac, "GMAC CMN core not found (required for BCM4706)\n");
-               err = -ENODEV;
-               goto err_netdev_free;
+       net_dev->irq = bgmac->irq;
+       SET_NETDEV_DEV(net_dev, bgmac->dev);
+
+       if (!is_valid_ether_addr(bgmac->mac_addr)) {
+               dev_err(bgmac->dev, "Invalid MAC addr: %pM\n",
+                       bgmac->mac_addr);
+               eth_random_addr(bgmac->mac_addr);
+               dev_warn(bgmac->dev, "Using random MAC: %pM\n",
+                        bgmac->mac_addr);
        }
-       bgmac->cmn = core->bus->drv_gmac_cmn.core;
+       ether_addr_copy(net_dev->dev_addr, bgmac->mac_addr);
 
-       switch (core->core_unit) {
-       case 0:
-               bgmac->phyaddr = sprom->et0phyaddr;
-               break;
-       case 1:
-               bgmac->phyaddr = sprom->et1phyaddr;
-               break;
-       case 2:
-               bgmac->phyaddr = sprom->et2phyaddr;
-               break;
-       }
-       bgmac->phyaddr &= BGMAC_PHY_MASK;
-       if (bgmac->phyaddr == BGMAC_PHY_MASK) {
-               bgmac_err(bgmac, "No PHY found\n");
-               err = -ENODEV;
-               goto err_netdev_free;
-       }
-       bgmac_info(bgmac, "Found PHY addr: %d%s\n", bgmac->phyaddr,
-                  bgmac->phyaddr == BGMAC_PHY_NOREGS ? " (NOREGS)" : "");
-
-       if (core->bus->hosttype == BCMA_HOSTTYPE_PCI) {
-               bgmac_err(bgmac, "PCI setup not implemented\n");
-               err = -ENOTSUPP;
-               goto err_netdev_free;
-       }
+       /* This (reset &) enable is not present in specs or reference driver but
+        * Broadcom does it in arch PCI code when enabling fake PCI device.
+        */
+       bgmac_clk_enable(bgmac, 0);
 
        bgmac_chip_reset(bgmac);
 
-       /* For Northstar, we have to take all GMAC core out of reset */
-       if (bgmac_is_bcm4707_family(bgmac)) {
-               struct bcma_device *ns_core;
-               int ns_gmac;
-
-               /* Northstar has 4 GMAC cores */
-               for (ns_gmac = 0; ns_gmac < 4; ns_gmac++) {
-                       /* As Northstar requirement, we have to reset all GMACs
-                        * before accessing one. bgmac_chip_reset() call
-                        * bcma_core_enable() for this core. Then the other
-                        * three GMACs didn't reset.  We do it here.
-                        */
-                       ns_core = bcma_find_core_unit(core->bus,
-                                                     BCMA_CORE_MAC_GBIT,
-                                                     ns_gmac);
-                       if (ns_core && !bcma_core_is_enabled(ns_core))
-                               bcma_core_enable(ns_core, 0);
-               }
-       }
-
        err = bgmac_dma_alloc(bgmac);
        if (err) {
-               bgmac_err(bgmac, "Unable to alloc memory for DMA\n");
+               dev_err(bgmac->dev, "Unable to alloc memory for DMA\n");
                goto err_netdev_free;
        }
 
@@ -1775,22 +1479,14 @@ static int bgmac_probe(struct bcma_device *core)
        if (bcm47xx_nvram_getenv("et0_no_txint", NULL, 0) == 0)
                bgmac->int_mask &= ~BGMAC_IS_TX_MASK;
 
-       /* TODO: reset the external phy. Specs are needed */
-       bgmac_phy_reset(bgmac);
-
-       bgmac->has_robosw = !!(core->bus->sprom.boardflags_lo &
-                              BGMAC_BFL_ENETROBO);
-       if (bgmac->has_robosw)
-               bgmac_warn(bgmac, "Support for Roboswitch not implemented\n");
-
-       if (core->bus->sprom.boardflags_lo & BGMAC_BFL_ENETADM)
-               bgmac_warn(bgmac, "Support for ADMtek ethernet switch not implemented\n");
-
        netif_napi_add(net_dev, &bgmac->napi, bgmac_poll, BGMAC_WEIGHT);
 
-       err = bgmac_mii_register(bgmac);
+       if (!bgmac->mii_bus)
+               err = bgmac_phy_connect_direct(bgmac);
+       else
+               err = bgmac_phy_connect(bgmac);
        if (err) {
-               bgmac_err(bgmac, "Cannot register MDIO\n");
+               dev_err(bgmac->dev, "Cannot connect to phy\n");
                goto err_dma_free;
        }
 
@@ -1800,64 +1496,34 @@ static int bgmac_probe(struct bcma_device *core)
 
        err = register_netdev(bgmac->net_dev);
        if (err) {
-               bgmac_err(bgmac, "Cannot register net device\n");
-               goto err_mii_unregister;
+               dev_err(bgmac->dev, "Cannot register net device\n");
+               goto err_phy_disconnect;
        }
 
        netif_carrier_off(net_dev);
 
        return 0;
 
-err_mii_unregister:
-       bgmac_mii_unregister(bgmac);
+err_phy_disconnect:
+       phy_disconnect(net_dev->phydev);
 err_dma_free:
        bgmac_dma_free(bgmac);
-
 err_netdev_free:
-       bcma_set_drvdata(core, NULL);
        free_netdev(net_dev);
 
        return err;
 }
+EXPORT_SYMBOL_GPL(bgmac_enet_probe);
 
-static void bgmac_remove(struct bcma_device *core)
+void bgmac_enet_remove(struct bgmac *bgmac)
 {
-       struct bgmac *bgmac = bcma_get_drvdata(core);
-
        unregister_netdev(bgmac->net_dev);
-       bgmac_mii_unregister(bgmac);
+       phy_disconnect(bgmac->net_dev->phydev);
        netif_napi_del(&bgmac->napi);
        bgmac_dma_free(bgmac);
-       bcma_set_drvdata(core, NULL);
        free_netdev(bgmac->net_dev);
 }
-
-static struct bcma_driver bgmac_bcma_driver = {
-       .name           = KBUILD_MODNAME,
-       .id_table       = bgmac_bcma_tbl,
-       .probe          = bgmac_probe,
-       .remove         = bgmac_remove,
-};
-
-static int __init bgmac_init(void)
-{
-       int err;
-
-       err = bcma_driver_register(&bgmac_bcma_driver);
-       if (err)
-               return err;
-       pr_info("Broadcom 47xx GBit MAC driver loaded\n");
-
-       return 0;
-}
-
-static void __exit bgmac_exit(void)
-{
-       bcma_driver_unregister(&bgmac_bcma_driver);
-}
-
-module_init(bgmac_init)
-module_exit(bgmac_exit)
+EXPORT_SYMBOL_GPL(bgmac_enet_remove);
 
 MODULE_AUTHOR("Rafał Miłecki");
 MODULE_LICENSE("GPL");
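The bgmac portion of this patch replaces direct `bcma_read32()`/`bcma_write32()` calls with per-bus function pointers carried on `struct bgmac`, so the same core probe path (`bgmac_enet_probe()`) can serve both BCMA and platform backends. A minimal, hypothetical sketch of that indirection pattern follows; the `fake_bus`, `mac_read`, and `mac_maskset` names are illustrative stand-ins, not the driver's actual symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a host backend's register space: a flat
 * array indexed by byte offset, as a memory-mapped window might be. */
struct fake_bus {
	uint32_t regs[64];
};

/* Mirrors the patch's approach: the MAC object carries read/write
 * function pointers so core code never touches the bus API directly. */
struct mac {
	struct fake_bus *bus;
	uint32_t (*read)(struct mac *mac, uint16_t offset);
	void (*write)(struct mac *mac, uint16_t offset, uint32_t value);
};

/* Backend-provided accessors, analogous to a bcma or platform host. */
static uint32_t fake_read(struct mac *mac, uint16_t offset)
{
	return mac->bus->regs[offset / 4];
}

static void fake_write(struct mac *mac, uint16_t offset, uint32_t value)
{
	mac->bus->regs[offset / 4] = value;
}

/* Core-driver helpers, analogous to bgmac_read()/bgmac_maskset():
 * they only dispatch through the ops set up at probe time. */
static uint32_t mac_read(struct mac *mac, uint16_t offset)
{
	return mac->read(mac, offset);
}

static void mac_maskset(struct mac *mac, uint16_t offset, uint32_t mask,
			uint32_t set)
{
	mac->write(mac, offset, (mac_read(mac, offset) & mask) | set);
}
```

The backend fills in the pointers once before handing the object to the core, which is exactly why the patch can drop `struct bcma_device *core` from the shared paths.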
index 99beb18..24a2502 100644 (file)
@@ -1,19 +1,6 @@
 #ifndef _BGMAC_H
 #define _BGMAC_H
 
-#define pr_fmt(fmt)            KBUILD_MODNAME ": " fmt
-
-#define bgmac_err(bgmac, fmt, ...) \
-       dev_err(&(bgmac)->core->dev, fmt, ##__VA_ARGS__)
-#define bgmac_warn(bgmac, fmt, ...) \
-       dev_warn(&(bgmac)->core->dev, fmt,  ##__VA_ARGS__)
-#define bgmac_info(bgmac, fmt, ...) \
-       dev_info(&(bgmac)->core->dev, fmt,  ##__VA_ARGS__)
-#define bgmac_dbg(bgmac, fmt, ...) \
-       dev_dbg(&(bgmac)->core->dev, fmt, ##__VA_ARGS__)
-
-#include <linux/bcma/bcma.h>
-#include <linux/brcmphy.h>
 #include <linux/netdevice.h>
 
 #define BGMAC_DEV_CTL                          0x000
 #define  BGMAC_CMDCFG_HD_SHIFT                 10
 #define  BGMAC_CMDCFG_SR_REV0                  0x00000800      /* Set to reset mode, for core rev 0-3 */
 #define  BGMAC_CMDCFG_SR_REV4                  0x00002000      /* Set to reset mode, for core rev >= 4 */
-#define  BGMAC_CMDCFG_SR(rev)  ((rev >= 4) ? BGMAC_CMDCFG_SR_REV4 : BGMAC_CMDCFG_SR_REV0)
 #define  BGMAC_CMDCFG_ML                       0x00008000      /* Set to activate mac loopback mode */
 #define  BGMAC_CMDCFG_AE                       0x00400000
 #define  BGMAC_CMDCFG_CFE                      0x00800000
 
 #define ETHER_MAX_LEN   1518
 
+/* Feature Flags */
+#define BGMAC_FEAT_TX_MASK_SETUP       BIT(0)
+#define BGMAC_FEAT_RX_MASK_SETUP       BIT(1)
+#define BGMAC_FEAT_IOST_ATTACHED       BIT(2)
+#define BGMAC_FEAT_NO_RESET            BIT(3)
+#define BGMAC_FEAT_MISC_PLL_REQ                BIT(4)
+#define BGMAC_FEAT_SW_TYPE_PHY         BIT(5)
+#define BGMAC_FEAT_SW_TYPE_EPHYRMII    BIT(6)
+#define BGMAC_FEAT_SW_TYPE_RGMII       BIT(7)
+#define BGMAC_FEAT_CMN_PHY_CTL         BIT(8)
+#define BGMAC_FEAT_FLW_CTRL1           BIT(9)
+#define BGMAC_FEAT_FLW_CTRL2           BIT(10)
+#define BGMAC_FEAT_SET_RXQ_CLK         BIT(11)
+#define BGMAC_FEAT_CLKCTLST            BIT(12)
+#define BGMAC_FEAT_NO_CLR_MIB          BIT(13)
+#define BGMAC_FEAT_FORCE_SPEED_2500    BIT(14)
+#define BGMAC_FEAT_CMDCFG_SR_REV4      BIT(15)
+
 struct bgmac_slot_info {
        union {
                struct sk_buff *skb;
@@ -436,8 +440,23 @@ struct bgmac_rx_header {
 };
 
 struct bgmac {
-       struct bcma_device *core;
-       struct bcma_device *cmn; /* Reference to CMN core for BCM4706 */
+       union {
+               struct {
+                       void *base;
+                       void *idm_base;
+               } plat;
+               struct {
+                       struct bcma_device *core;
+                       /* Reference to CMN core for BCM4706 */
+                       struct bcma_device *cmn;
+               } bcma;
+       };
+
+       struct device *dev;
+       struct device *dma_dev;
+       unsigned char mac_addr[ETH_ALEN];
+       u32 feature_flags;
+
        struct net_device *net_dev;
        struct napi_struct napi;
        struct mii_bus *mii_bus;
@@ -452,6 +471,7 @@ struct bgmac {
        u32 mib_rx_regs[BGMAC_NUM_MIB_RX_REGS];
 
        /* Int */
+       int irq;
        u32 int_mask;
 
        /* Current MAC state */
@@ -462,16 +482,71 @@ struct bgmac {
        bool has_robosw;
 
        bool loopback;
+
+       u32 (*read)(struct bgmac *bgmac, u16 offset);
+       void (*write)(struct bgmac *bgmac, u16 offset, u32 value);
+       u32 (*idm_read)(struct bgmac *bgmac, u16 offset);
+       void (*idm_write)(struct bgmac *bgmac, u16 offset, u32 value);
+       bool (*clk_enabled)(struct bgmac *bgmac);
+       void (*clk_enable)(struct bgmac *bgmac, u32 flags);
+       void (*cco_ctl_maskset)(struct bgmac *bgmac, u32 offset, u32 mask,
+                               u32 set);
+       u32 (*get_bus_clock)(struct bgmac *bgmac);
+       void (*cmn_maskset32)(struct bgmac *bgmac, u16 offset, u32 mask,
+                             u32 set);
 };
 
+int bgmac_enet_probe(struct bgmac *info);
+void bgmac_enet_remove(struct bgmac *bgmac);
+
+struct mii_bus *bcma_mdio_mii_register(struct bcma_device *core, u8 phyaddr);
+void bcma_mdio_mii_unregister(struct mii_bus *mii_bus);
+
 static inline u32 bgmac_read(struct bgmac *bgmac, u16 offset)
 {
-       return bcma_read32(bgmac->core, offset);
+       return bgmac->read(bgmac, offset);
 }
 
 static inline void bgmac_write(struct bgmac *bgmac, u16 offset, u32 value)
 {
-       bcma_write32(bgmac->core, offset, value);
+       bgmac->write(bgmac, offset, value);
+}
+
+static inline u32 bgmac_idm_read(struct bgmac *bgmac, u16 offset)
+{
+       return bgmac->idm_read(bgmac, offset);
+}
+
+static inline void bgmac_idm_write(struct bgmac *bgmac, u16 offset, u32 value)
+{
+       bgmac->idm_write(bgmac, offset, value);
+}
+
+static inline bool bgmac_clk_enabled(struct bgmac *bgmac)
+{
+       return bgmac->clk_enabled(bgmac);
+}
+
+static inline void bgmac_clk_enable(struct bgmac *bgmac, u32 flags)
+{
+       bgmac->clk_enable(bgmac, flags);
+}
+
+static inline void bgmac_cco_ctl_maskset(struct bgmac *bgmac, u32 offset,
+                                        u32 mask, u32 set)
+{
+       bgmac->cco_ctl_maskset(bgmac, offset, mask, set);
+}
+
+static inline u32 bgmac_get_bus_clock(struct bgmac *bgmac)
+{
+       return bgmac->get_bus_clock(bgmac);
+}
+
+static inline void bgmac_cmn_maskset32(struct bgmac *bgmac, u16 offset,
+                                      u32 mask, u32 set)
+{
+       bgmac->cmn_maskset32(bgmac, offset, mask, set);
 }
 
 static inline void bgmac_maskset(struct bgmac *bgmac, u16 offset, u32 mask,
@@ -489,5 +564,4 @@ static inline void bgmac_set(struct bgmac *bgmac, u16 offset, u32 set)
 {
        bgmac_maskset(bgmac, offset, ~0, set);
 }
-
 #endif /* _BGMAC_H */
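The new `BGMAC_FEAT_*` bits added above replace scattered per-bus and per-chip conditionals with a capability bitmask that the host driver sets once at attach time. A small illustrative check of that pattern (the two flag values are copied from the header; the `has_feature` helper is an invented sketch, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)				(1U << (n))

/* Flag values as defined in bgmac.h */
#define BGMAC_FEAT_CLKCTLST		BIT(12)
#define BGMAC_FEAT_NO_CLR_MIB		BIT(13)

/* Hypothetical helper: a host backend would populate feature_flags in
 * its probe path, and core code tests bits instead of the bus type. */
static int has_feature(uint32_t feature_flags, uint32_t feat)
{
	return (feature_flags & feat) != 0;
}
```

Testing a bit is cheap and keeps chip quirks declarative, which is why the conversion can delete helpers like `BGMAC_CMDCFG_SR(rev)` in favor of a `BGMAC_FEAT_CMDCFG_SR_REV4` flag.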
index 70b148a..8a0165b 100644 (file)
@@ -73,6 +73,7 @@ enum board_idx {
        BCM57301,
        BCM57302,
        BCM57304,
+       BCM58700,
        BCM57311,
        BCM57312,
        BCM57402,
@@ -98,6 +99,7 @@ static const struct {
        { "Broadcom BCM57301 NetXtreme-C Single-port 10Gb Ethernet" },
        { "Broadcom BCM57302 NetXtreme-C Dual-port 10Gb/25Gb Ethernet" },
        { "Broadcom BCM57304 NetXtreme-C Dual-port 10Gb/25Gb/40Gb/50Gb Ethernet" },
+       { "Broadcom BCM58700 Nitro 4-port 1Gb/2.5Gb/10Gb Ethernet" },
        { "Broadcom BCM57311 NetXtreme-C Single-port 10Gb Ethernet" },
        { "Broadcom BCM57312 NetXtreme-C Dual-port 10Gb/25Gb Ethernet" },
        { "Broadcom BCM57402 NetXtreme-E Dual-port 10Gb Ethernet" },
@@ -120,6 +122,7 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
        { PCI_VDEVICE(BROADCOM, 0x16c8), .driver_data = BCM57301 },
        { PCI_VDEVICE(BROADCOM, 0x16c9), .driver_data = BCM57302 },
        { PCI_VDEVICE(BROADCOM, 0x16ca), .driver_data = BCM57304 },
+       { PCI_VDEVICE(BROADCOM, 0x16cd), .driver_data = BCM58700 },
        { PCI_VDEVICE(BROADCOM, 0x16ce), .driver_data = BCM57311 },
        { PCI_VDEVICE(BROADCOM, 0x16cf), .driver_data = BCM57312 },
        { PCI_VDEVICE(BROADCOM, 0x16d0), .driver_data = BCM57402 },
@@ -997,7 +1000,7 @@ static struct sk_buff *bnxt_gro_func_5731x(struct bnxt_tpa_info *tpa_info,
                 * correct protocol ID, it must be a loopback packet where
                 * the offsets are off by 4.
                 */
-               if (proto != htons(ETH_P_IP) && proto && htons(ETH_P_IPV6))
+               if (proto != htons(ETH_P_IP) && proto != htons(ETH_P_IPV6))
                        loopback = true;
        }
        if (loopback) {
@@ -1668,6 +1671,76 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
        return rx_pkts;
 }
 
+static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
+{
+       struct bnxt_napi *bnapi = container_of(napi, struct bnxt_napi, napi);
+       struct bnxt *bp = bnapi->bp;
+       struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
+       struct bnxt_rx_ring_info *rxr = bnapi->rx_ring;
+       struct tx_cmp *txcmp;
+       struct rx_cmp_ext *rxcmp1;
+       u32 cp_cons, tmp_raw_cons;
+       u32 raw_cons = cpr->cp_raw_cons;
+       u32 rx_pkts = 0;
+       bool agg_event = false;
+
+       while (1) {
+               int rc;
+
+               cp_cons = RING_CMP(raw_cons);
+               txcmp = &cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
+
+               if (!TX_CMP_VALID(txcmp, raw_cons))
+                       break;
+
+               if ((TX_CMP_TYPE(txcmp) & 0x30) == 0x10) {
+                       tmp_raw_cons = NEXT_RAW_CMP(raw_cons);
+                       cp_cons = RING_CMP(tmp_raw_cons);
+                       rxcmp1 = (struct rx_cmp_ext *)
+                         &cpr->cp_desc_ring[CP_RING(cp_cons)][CP_IDX(cp_cons)];
+
+                       if (!RX_CMP_VALID(rxcmp1, tmp_raw_cons))
+                               break;
+
+                       /* force an error to recycle the buffer */
+                       rxcmp1->rx_cmp_cfa_code_errors_v2 |=
+                               cpu_to_le32(RX_CMPL_ERRORS_CRC_ERROR);
+
+                       rc = bnxt_rx_pkt(bp, bnapi, &raw_cons, &agg_event);
+                       if (likely(rc == -EIO))
+                               rx_pkts++;
+                       else if (rc == -EBUSY)  /* partial completion */
+                               break;
+               } else if (unlikely(TX_CMP_TYPE(txcmp) ==
+                                   CMPL_BASE_TYPE_HWRM_DONE)) {
+                       bnxt_hwrm_handler(bp, txcmp);
+               } else {
+                       netdev_err(bp->dev,
+                                  "Invalid completion received on special ring\n");
+               }
+               raw_cons = NEXT_RAW_CMP(raw_cons);
+
+               if (rx_pkts == budget)
+                       break;
+       }
+
+       cpr->cp_raw_cons = raw_cons;
+       BNXT_CP_DB(cpr->cp_doorbell, cpr->cp_raw_cons);
+       writel(DB_KEY_RX | rxr->rx_prod, rxr->rx_doorbell);
+       writel(DB_KEY_RX | rxr->rx_prod, rxr->rx_doorbell);
+
+       if (agg_event) {
+               writel(DB_KEY_RX | rxr->rx_agg_prod, rxr->rx_agg_doorbell);
+               writel(DB_KEY_RX | rxr->rx_agg_prod, rxr->rx_agg_doorbell);
+       }
+
+       if (!bnxt_has_work(bp, cpr) && rx_pkts < budget) {
+               napi_complete(napi);
+               BNXT_CP_DB_REARM(cpr->cp_doorbell, cpr->cp_raw_cons);
+       }
+       return rx_pkts;
+}
+
 static int bnxt_poll(struct napi_struct *napi, int budget)
 {
        struct bnxt_napi *bnapi = container_of(napi, struct bnxt_napi, napi);
@@ -2340,6 +2413,9 @@ static int bnxt_alloc_vnics(struct bnxt *bp)
                num_vnics += bp->rx_nr_rings;
 #endif
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+               num_vnics++;
+
        bp->vnic_info = kcalloc(num_vnics, sizeof(struct bnxt_vnic_info),
                                GFP_KERNEL);
        if (!bp->vnic_info)
@@ -2357,7 +2433,8 @@ static void bnxt_init_vnics(struct bnxt *bp)
                struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
 
                vnic->fw_vnic_id = INVALID_HW_RING_ID;
-               vnic->fw_rss_cos_lb_ctx = INVALID_HW_RING_ID;
+               vnic->fw_rss_cos_lb_ctx[0] = INVALID_HW_RING_ID;
+               vnic->fw_rss_cos_lb_ctx[1] = INVALID_HW_RING_ID;
                vnic->fw_l2_ctx_id = INVALID_HW_RING_ID;
 
                if (bp->vnic_info[i].rss_hash_key) {
@@ -2661,7 +2738,7 @@ static int bnxt_alloc_stats(struct bnxt *bp)
                cpr->hw_stats_ctx_id = INVALID_STATS_CTX_ID;
        }
 
-       if (BNXT_PF(bp)) {
+       if (BNXT_PF(bp) && bp->chip_num != CHIP_NUM_58700) {
                bp->hw_port_stats_size = sizeof(struct rx_port_stats) +
                                         sizeof(struct tx_port_stats) + 1024;
 
@@ -3200,8 +3277,10 @@ static int bnxt_hwrm_set_vnic_filter(struct bnxt *bp, u16 vnic_id, u16 idx,
        struct hwrm_cfa_l2_filter_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_CFA_L2_FILTER_ALLOC, -1, -1);
-       req.flags = cpu_to_le32(CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_RX |
-                               CFA_L2_FILTER_ALLOC_REQ_FLAGS_OUTERMOST);
+       req.flags = cpu_to_le32(CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_RX);
+       if (!BNXT_CHIP_TYPE_NITRO_A0(bp))
+               req.flags |=
+                       cpu_to_le32(CFA_L2_FILTER_ALLOC_REQ_FLAGS_OUTERMOST);
        req.dst_id = cpu_to_le16(bp->vnic_info[vnic_id].fw_vnic_id);
        req.enables =
                cpu_to_le32(CFA_L2_FILTER_ALLOC_REQ_ENABLES_L2_ADDR |
@@ -3308,7 +3387,7 @@ static int bnxt_hwrm_vnic_set_rss(struct bnxt *bp, u16 vnic_id, bool set_rss)
        struct bnxt_vnic_info *vnic = &bp->vnic_info[vnic_id];
        struct hwrm_vnic_rss_cfg_input req = {0};
 
-       if (vnic->fw_rss_cos_lb_ctx == INVALID_HW_RING_ID)
+       if (vnic->fw_rss_cos_lb_ctx[0] == INVALID_HW_RING_ID)
                return 0;
 
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VNIC_RSS_CFG, -1, -1);
@@ -3320,10 +3399,14 @@ static int bnxt_hwrm_vnic_set_rss(struct bnxt *bp, u16 vnic_id, bool set_rss)
 
                req.hash_type = cpu_to_le32(vnic->hash_type);
 
-               if (vnic->flags & BNXT_VNIC_RSS_FLAG)
-                       max_rings = bp->rx_nr_rings;
-               else
+               if (vnic->flags & BNXT_VNIC_RSS_FLAG) {
+                       if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+                               max_rings = bp->rx_nr_rings - 1;
+                       else
+                               max_rings = bp->rx_nr_rings;
+               } else {
                        max_rings = 1;
+               }
 
                /* Fill the RSS indirection table with ring group ids */
                for (i = 0, j = 0; i < HW_HASH_INDEX_SIZE; i++, j++) {
@@ -3336,7 +3419,7 @@ static int bnxt_hwrm_vnic_set_rss(struct bnxt *bp, u16 vnic_id, bool set_rss)
                req.hash_key_tbl_addr =
                        cpu_to_le64(vnic->rss_hash_key_dma_addr);
        }
-       req.rss_ctx_idx = cpu_to_le16(vnic->fw_rss_cos_lb_ctx);
+       req.rss_ctx_idx = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[0]);
        return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
 }
 
@@ -3359,32 +3442,35 @@ static int bnxt_hwrm_vnic_set_hds(struct bnxt *bp, u16 vnic_id)
        return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
 }
 
-static void bnxt_hwrm_vnic_ctx_free_one(struct bnxt *bp, u16 vnic_id)
+static void bnxt_hwrm_vnic_ctx_free_one(struct bnxt *bp, u16 vnic_id,
+                                       u16 ctx_idx)
 {
        struct hwrm_vnic_rss_cos_lb_ctx_free_input req = {0};
 
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VNIC_RSS_COS_LB_CTX_FREE, -1, -1);
        req.rss_cos_lb_ctx_id =
-               cpu_to_le16(bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx);
+               cpu_to_le16(bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx[ctx_idx]);
 
        hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
-       bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx = INVALID_HW_RING_ID;
+       bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx[ctx_idx] = INVALID_HW_RING_ID;
 }
 
 static void bnxt_hwrm_vnic_ctx_free(struct bnxt *bp)
 {
-       int i;
+       int i, j;
 
        for (i = 0; i < bp->nr_vnics; i++) {
                struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
 
-               if (vnic->fw_rss_cos_lb_ctx != INVALID_HW_RING_ID)
-                       bnxt_hwrm_vnic_ctx_free_one(bp, i);
+               for (j = 0; j < BNXT_MAX_CTX_PER_VNIC; j++) {
+                       if (vnic->fw_rss_cos_lb_ctx[j] != INVALID_HW_RING_ID)
+                               bnxt_hwrm_vnic_ctx_free_one(bp, i, j);
+               }
        }
        bp->rsscos_nr_ctxs = 0;
 }
 
-static int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, u16 vnic_id)
+static int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, u16 vnic_id, u16 ctx_idx)
 {
        int rc;
        struct hwrm_vnic_rss_cos_lb_ctx_alloc_input req = {0};
@@ -3397,7 +3483,7 @@ static int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, u16 vnic_id)
        mutex_lock(&bp->hwrm_cmd_lock);
        rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
        if (!rc)
-               bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx =
+               bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx[ctx_idx] =
                        le16_to_cpu(resp->rss_cos_lb_ctx_id);
        mutex_unlock(&bp->hwrm_cmd_lock);
 
@@ -3412,16 +3498,31 @@ static int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id)
        u16 def_vlan = 0;
 
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VNIC_CFG, -1, -1);
+
+       req.enables = cpu_to_le32(VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP);
        /* Only RSS support for now TBD: COS & LB */
-       req.enables = cpu_to_le32(VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP |
-                                 VNIC_CFG_REQ_ENABLES_RSS_RULE |
-                                 VNIC_CFG_REQ_ENABLES_MRU);
-       req.rss_rule = cpu_to_le16(vnic->fw_rss_cos_lb_ctx);
-       req.cos_rule = cpu_to_le16(0xffff);
+       if (vnic->fw_rss_cos_lb_ctx[0] != INVALID_HW_RING_ID) {
+               req.rss_rule = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[0]);
+               req.enables |= cpu_to_le32(VNIC_CFG_REQ_ENABLES_RSS_RULE |
+                                          VNIC_CFG_REQ_ENABLES_MRU);
+       } else {
+               req.rss_rule = cpu_to_le16(0xffff);
+       }
+
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp) &&
+           (vnic->fw_rss_cos_lb_ctx[0] != INVALID_HW_RING_ID)) {
+               req.cos_rule = cpu_to_le16(vnic->fw_rss_cos_lb_ctx[1]);
+               req.enables |= cpu_to_le32(VNIC_CFG_REQ_ENABLES_COS_RULE);
+       } else {
+               req.cos_rule = cpu_to_le16(0xffff);
+       }
+
        if (vnic->flags & BNXT_VNIC_RSS_FLAG)
                ring = 0;
        else if (vnic->flags & BNXT_VNIC_RFS_FLAG)
                ring = vnic_id - 1;
+       else if ((vnic_id == 1) && BNXT_CHIP_TYPE_NITRO_A0(bp))
+               ring = bp->rx_nr_rings - 1;
 
        grp_idx = bp->rx_ring[ring].bnapi->index;
        req.vnic_id = cpu_to_le16(vnic->fw_vnic_id);
@@ -3489,7 +3590,8 @@ static int bnxt_hwrm_vnic_alloc(struct bnxt *bp, u16 vnic_id,
                                        bp->grp_info[grp_idx].fw_grp_id;
        }
 
-       bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx = INVALID_HW_RING_ID;
+       bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx[0] = INVALID_HW_RING_ID;
+       bp->vnic_info[vnic_id].fw_rss_cos_lb_ctx[1] = INVALID_HW_RING_ID;
        if (vnic_id == 0)
                req.flags = cpu_to_le32(VNIC_ALLOC_REQ_FLAGS_DEFAULT);
 
@@ -3922,6 +4024,9 @@ static int bnxt_hwrm_stat_ctx_free(struct bnxt *bp)
        if (!bp->bnapi)
                return 0;
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+               return 0;
+
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_STAT_CTX_FREE, -1, -1);
 
        mutex_lock(&bp->hwrm_cmd_lock);
@@ -3950,6 +4055,9 @@ static int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp)
        struct hwrm_stat_ctx_alloc_input req = {0};
        struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+               return 0;
+
        bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_STAT_CTX_ALLOC, -1, -1);
 
        req.update_period_ms = cpu_to_le32(bp->stats_coal_ticks / 1000);
@@ -4163,6 +4271,9 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp)
                bp->hwrm_max_req_len = le16_to_cpu(resp->max_req_win_len);
 
        bp->chip_num = le16_to_cpu(resp->chip_num);
+       if (bp->chip_num == CHIP_NUM_58700 && !resp->chip_rev &&
+           !resp->chip_metal)
+               bp->flags |= BNXT_FLAG_CHIP_NITRO_A0;
 
 hwrm_ver_get_exit:
        mutex_unlock(&bp->hwrm_cmd_lock);
@@ -4252,7 +4363,7 @@ static int bnxt_setup_vnic(struct bnxt *bp, u16 vnic_id)
        int rc;
 
        /* allocate context for vnic */
-       rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic_id);
+       rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic_id, 0);
        if (rc) {
                netdev_err(bp->dev, "hwrm vnic %d alloc failure rc: %x\n",
                           vnic_id, rc);
@@ -4260,6 +4371,16 @@ static int bnxt_setup_vnic(struct bnxt *bp, u16 vnic_id)
        }
        bp->rsscos_nr_ctxs++;
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+               rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic_id, 1);
+               if (rc) {
+                       netdev_err(bp->dev, "hwrm vnic %d cos ctx alloc failure rc: %x\n",
+                                  vnic_id, rc);
+                       goto vnic_setup_err;
+               }
+               bp->rsscos_nr_ctxs++;
+       }
+
        /* configure default vnic, ring grp */
        rc = bnxt_hwrm_vnic_cfg(bp, vnic_id);
        if (rc) {
@@ -4327,6 +4448,26 @@ static bool bnxt_promisc_ok(struct bnxt *bp)
        return true;
 }
 
+static int bnxt_setup_nitroa0_vnic(struct bnxt *bp)
+{
+       int rc = 0;
+
+       rc = bnxt_hwrm_vnic_alloc(bp, 1, bp->rx_nr_rings - 1, 1);
+       if (rc) {
+               netdev_err(bp->dev, "Cannot allocate special vnic for NS2 A0: %x\n",
+                          rc);
+               return rc;
+       }
+
+       rc = bnxt_hwrm_vnic_cfg(bp, 1);
+       if (rc) {
+               netdev_err(bp->dev, "Cannot configure special vnic for NS2 A0: %x\n",
+                          rc);
+               return rc;
+       }
+       return rc;
+}
+
 static int bnxt_cfg_rx_mode(struct bnxt *);
 static bool bnxt_mc_list_updated(struct bnxt *, u32 *);
 
@@ -4334,6 +4475,7 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
 {
        struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
        int rc = 0;
+       unsigned int rx_nr_rings = bp->rx_nr_rings;
 
        if (irq_re_init) {
                rc = bnxt_hwrm_stat_ctx_alloc(bp);
@@ -4356,8 +4498,11 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
                goto err_out;
        }
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+               rx_nr_rings--;
+
        /* default vnic 0 */
-       rc = bnxt_hwrm_vnic_alloc(bp, 0, 0, bp->rx_nr_rings);
+       rc = bnxt_hwrm_vnic_alloc(bp, 0, 0, rx_nr_rings);
        if (rc) {
                netdev_err(bp->dev, "hwrm vnic alloc failure rc: %x\n", rc);
                goto err_out;
@@ -4412,7 +4557,14 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
        rc = bnxt_hwrm_set_coal(bp);
        if (rc)
                netdev_warn(bp->dev, "HWRM set coalescing failure rc: %x\n",
-                           rc);
+                               rc);
+
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+               rc = bnxt_setup_nitroa0_vnic(bp);
+               if (rc)
+                       netdev_err(bp->dev, "Special vnic setup failure for NS2 A0 rc: %x\n",
+                                  rc);
+       }
 
        if (BNXT_VF(bp)) {
                bnxt_hwrm_func_qcfg(bp);
@@ -4721,14 +4873,23 @@ static void bnxt_del_napi(struct bnxt *bp)
 static void bnxt_init_napi(struct bnxt *bp)
 {
        int i;
+       unsigned int cp_nr_rings = bp->cp_nr_rings;
        struct bnxt_napi *bnapi;
 
        if (bp->flags & BNXT_FLAG_USING_MSIX) {
-               for (i = 0; i < bp->cp_nr_rings; i++) {
+               if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+                       cp_nr_rings--;
+               for (i = 0; i < cp_nr_rings; i++) {
                        bnapi = bp->bnapi[i];
                        netif_napi_add(bp->dev, &bnapi->napi,
                                       bnxt_poll, 64);
                }
+               if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+                       bnapi = bp->bnapi[cp_nr_rings];
+                       netif_napi_add(bp->dev, &bnapi->napi,
+                                      bnxt_poll_nitroa0, 64);
+                       napi_hash_add(&bnapi->napi);
+               }
        } else {
                bnapi = bp->bnapi[0];
                netif_napi_add(bp->dev, &bnapi->napi, bnxt_poll, 64);
@@ -4769,9 +4930,7 @@ static void bnxt_tx_disable(struct bnxt *bp)
                for (i = 0; i < bp->tx_nr_rings; i++) {
                        txr = &bp->tx_ring[i];
                        txq = netdev_get_tx_queue(bp->dev, i);
-                       __netif_tx_lock(txq, smp_processor_id());
                        txr->dev_state = BNXT_DEV_STATE_CLOSING;
-                       __netif_tx_unlock(txq);
                }
        }
        /* Stop all TX queues */
@@ -5681,7 +5840,7 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
        bool update_tpa = false;
 
        flags &= ~BNXT_FLAG_ALL_CONFIG_FEATS;
-       if ((features & NETIF_F_GRO) && (bp->pdev->revision > 0))
+       if ((features & NETIF_F_GRO) && !BNXT_CHIP_TYPE_NITRO_A0(bp))
                flags |= BNXT_FLAG_GRO;
        if (features & NETIF_F_LRO)
                flags |= BNXT_FLAG_LRO;
@@ -6488,7 +6647,10 @@ static void _bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx,
                *max_cp = min_t(int, *max_cp, bp->pf.max_stat_ctxs);
                max_ring_grps = bp->pf.max_hw_ring_grps;
        }
-
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp) && BNXT_PF(bp)) {
+               *max_cp -= 1;
+               *max_rx -= 2;
+       }
        if (bp->flags & BNXT_FLAG_AGG_RINGS)
                *max_rx >>= 1;
        *max_rx = min_t(int, *max_rx, max_ring_grps);
@@ -6524,6 +6686,10 @@ static int bnxt_set_dflt_rings(struct bnxt *bp)
        bp->cp_nr_rings = sh ? max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) :
                               bp->tx_nr_rings + bp->rx_nr_rings;
        bp->num_stat_ctxs = bp->cp_nr_rings;
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+               bp->rx_nr_rings++;
+               bp->cp_nr_rings++;
+       }
        return rc;
 }
 
@@ -6550,6 +6716,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
        struct bnxt *bp;
        int rc, max_irqs;
 
+       if (pdev->device == 0x16cd && pci_is_bridge(pdev))
+               return -ENODEV;
+
        if (version_printed++ == 0)
                pr_info("%s", version);
 
@@ -6576,13 +6745,25 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 
        pci_set_drvdata(pdev, dev);
 
+       rc = bnxt_alloc_hwrm_resources(bp);
+       if (rc)
+               goto init_err;
+
+       mutex_init(&bp->hwrm_cmd_lock);
+       rc = bnxt_hwrm_ver_get(bp);
+       if (rc)
+               goto init_err;
+
        dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
                           NETIF_F_TSO | NETIF_F_TSO6 |
                           NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
                           NETIF_F_GSO_IPXIP4 |
                           NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
                           NETIF_F_GSO_PARTIAL | NETIF_F_RXHASH |
-                          NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO;
+                          NETIF_F_RXCSUM | NETIF_F_GRO;
+
+       if (!BNXT_CHIP_TYPE_NITRO_A0(bp))
+               dev->hw_features |= NETIF_F_LRO;
 
        dev->hw_enc_features =
                        NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
@@ -6601,15 +6782,6 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 #ifdef CONFIG_BNXT_SRIOV
        init_waitqueue_head(&bp->sriov_cfg_wait);
 #endif
-       rc = bnxt_alloc_hwrm_resources(bp);
-       if (rc)
-               goto init_err;
-
-       mutex_init(&bp->hwrm_cmd_lock);
-       rc = bnxt_hwrm_ver_get(bp);
-       if (rc)
-               goto init_err;
-
        bp->gro_func = bnxt_gro_func_5730x;
        if (BNXT_CHIP_NUM_57X1X(bp->chip_num))
                bp->gro_func = bnxt_gro_func_5731x;
@@ -6647,7 +6819,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 #endif
        bnxt_set_dflt_rings(bp);
 
-       if (BNXT_PF(bp)) {
+       if (BNXT_PF(bp) && !BNXT_CHIP_TYPE_NITRO_A0(bp)) {
                dev->hw_features |= NETIF_F_NTUPLE;
                if (bnxt_rfs_capable(bp)) {
                        bp->flags |= BNXT_FLAG_RFS;
index 2313e37..5307a2e 100644
@@ -695,7 +695,8 @@ struct bnxt_ring_grp_info {
 
 struct bnxt_vnic_info {
        u16             fw_vnic_id; /* returned by Chimp during alloc */
-       u16             fw_rss_cos_lb_ctx;
+#define BNXT_MAX_CTX_PER_VNIC  2
+       u16             fw_rss_cos_lb_ctx[BNXT_MAX_CTX_PER_VNIC];
        u16             fw_l2_ctx_id;
 #define BNXT_MAX_UC_ADDRS      4
        __le64          fw_l2_filter_id[BNXT_MAX_UC_ADDRS];
@@ -893,6 +894,7 @@ struct bnxt {
 #define CHIP_NUM_57301         0x16c8
 #define CHIP_NUM_57302         0x16c9
 #define CHIP_NUM_57304         0x16ca
+#define CHIP_NUM_58700         0x16cd
 #define CHIP_NUM_57402         0x16d0
 #define CHIP_NUM_57404         0x16d1
 #define CHIP_NUM_57406         0x16d2
@@ -954,6 +956,7 @@ struct bnxt {
        #define BNXT_FLAG_SHARED_RINGS  0x200
        #define BNXT_FLAG_PORT_STATS    0x400
        #define BNXT_FLAG_EEE_CAP       0x1000
+       #define BNXT_FLAG_CHIP_NITRO_A0 0x1000000
 
        #define BNXT_FLAG_ALL_CONFIG_FEATS (BNXT_FLAG_TPA |             \
                                            BNXT_FLAG_RFS |             \
@@ -963,6 +966,7 @@ struct bnxt {
 #define BNXT_VF(bp)            ((bp)->flags & BNXT_FLAG_VF)
 #define BNXT_NPAR(bp)          ((bp)->port_partition_type)
 #define BNXT_SINGLE_PF(bp)     (BNXT_PF(bp) && !BNXT_NPAR(bp))
+#define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0)
 
        struct bnxt_napi        **bnapi;
 
index 0f7dd86..b83e174 100644
@@ -362,9 +362,13 @@ static void bnxt_get_channels(struct net_device *dev,
        channel->max_other = 0;
        if (bp->flags & BNXT_FLAG_SHARED_RINGS) {
                channel->combined_count = bp->rx_nr_rings;
+               if (BNXT_CHIP_TYPE_NITRO_A0(bp))
+                       channel->combined_count--;
        } else {
-               channel->rx_count = bp->rx_nr_rings;
-               channel->tx_count = bp->tx_nr_rings_per_tc;
+               if (!BNXT_CHIP_TYPE_NITRO_A0(bp)) {
+                       channel->rx_count = bp->rx_nr_rings;
+                       channel->tx_count = bp->tx_nr_rings_per_tc;
+               }
        }
 }
 
@@ -387,6 +391,10 @@ static int bnxt_set_channels(struct net_device *dev,
            (channel->rx_count || channel->tx_count))
                return -EINVAL;
 
+       if (BNXT_CHIP_TYPE_NITRO_A0(bp) && (channel->rx_count ||
+                                           channel->tx_count))
+               return -EINVAL;
+
        if (channel->combined_count)
                sh = true;
 
@@ -1684,7 +1692,7 @@ static int bnxt_get_module_eeprom(struct net_device *dev,
 {
        struct bnxt *bp = netdev_priv(dev);
        u16  start = eeprom->offset, length = eeprom->len;
-       int rc;
+       int rc = 0;
 
        memset(data, 0, eeprom->len);
 
index 76ed6df..8d4f849 100644
@@ -450,6 +450,30 @@ static inline void bcmgenet_rdma_ring_writel(struct bcmgenet_priv *priv,
                        genet_dma_ring_regs[r]);
 }
 
+static int bcmgenet_get_settings(struct net_device *dev,
+                                struct ethtool_cmd *cmd)
+{
+       if (!netif_running(dev))
+               return -EINVAL;
+
+       if (!dev->phydev)
+               return -ENODEV;
+
+       return phy_ethtool_gset(dev->phydev, cmd);
+}
+
+static int bcmgenet_set_settings(struct net_device *dev,
+                                struct ethtool_cmd *cmd)
+{
+       if (!netif_running(dev))
+               return -EINVAL;
+
+       if (!dev->phydev)
+               return -ENODEV;
+
+       return phy_ethtool_sset(dev->phydev, cmd);
+}
+
 static int bcmgenet_set_rx_csum(struct net_device *dev,
                                netdev_features_t wanted)
 {
@@ -953,6 +977,8 @@ static struct ethtool_ops bcmgenet_ethtool_ops = {
        .get_strings            = bcmgenet_get_strings,
        .get_sset_count         = bcmgenet_get_sset_count,
        .get_ethtool_stats      = bcmgenet_get_ethtool_stats,
+       .get_settings           = bcmgenet_get_settings,
+       .set_settings           = bcmgenet_set_settings,
        .get_drvinfo            = bcmgenet_get_drvinfo,
        .get_link               = ethtool_op_get_link,
        .get_msglevel           = bcmgenet_get_msglevel,
@@ -964,8 +990,6 @@ static struct ethtool_ops bcmgenet_ethtool_ops = {
        .nway_reset             = bcmgenet_nway_reset,
        .get_coalesce           = bcmgenet_get_coalesce,
        .set_coalesce           = bcmgenet_set_coalesce,
-       .get_link_ksettings = phy_ethtool_get_link_ksettings,
-       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 /* Power down the unimac, based on mode. */
index 4edb98c..c044667 100644
@@ -192,7 +192,6 @@ MODULE_PARM_DESC(buffer_size, "DMA buffer allocation size");
  * @napi:      NAPI structure
  * @msg_enable:        device state flags
  * @lock:      device lock
- * @phy:       attached PHY
  * @mdio:      MDIO bus for PHY access
  * @phy_id:    address of attached PHY
  */
@@ -219,7 +218,6 @@ struct ethoc {
 
        spinlock_t lock;
 
-       struct phy_device *phy;
        struct mii_bus *mdio;
        struct clk *clk;
        s8 phy_id;
@@ -694,7 +692,6 @@ static int ethoc_mdio_probe(struct net_device *dev)
                return err;
        }
 
-       priv->phy = phy;
        phy->advertising &= ~(ADVERTISED_1000baseT_Full |
                              ADVERTISED_1000baseT_Half);
        phy->supported &= ~(SUPPORTED_1000baseT_Full |
@@ -724,7 +721,7 @@ static int ethoc_open(struct net_device *dev)
                netif_start_queue(dev);
        }
 
-       phy_start(priv->phy);
+       phy_start(dev->phydev);
        napi_enable(&priv->napi);
 
        if (netif_msg_ifup(priv)) {
@@ -741,8 +738,8 @@ static int ethoc_stop(struct net_device *dev)
 
        napi_disable(&priv->napi);
 
-       if (priv->phy)
-               phy_stop(priv->phy);
+       if (dev->phydev)
+               phy_stop(dev->phydev);
 
        ethoc_disable_rx_and_tx(priv);
        free_irq(dev->irq, dev);
@@ -770,7 +767,7 @@ static int ethoc_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
                if (!phy)
                        return -ENODEV;
        } else {
-               phy = priv->phy;
+               phy = dev->phydev;
        }
 
        return phy_mii_ioctl(phy, ifr, cmd);
@@ -860,6 +857,11 @@ static netdev_tx_t ethoc_start_xmit(struct sk_buff *skb, struct net_device *dev)
        unsigned int entry;
        void *dest;
 
+       if (skb_put_padto(skb, ETHOC_ZLEN)) {
+               dev->stats.tx_errors++;
+               goto out_no_free;
+       }
+
        if (unlikely(skb->len > ETHOC_BUFSIZ)) {
                dev->stats.tx_errors++;
                goto out;
@@ -894,31 +896,10 @@ static netdev_tx_t ethoc_start_xmit(struct sk_buff *skb, struct net_device *dev)
        skb_tx_timestamp(skb);
 out:
        dev_kfree_skb(skb);
+out_no_free:
        return NETDEV_TX_OK;
 }
 
-static int ethoc_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct ethoc *priv = netdev_priv(dev);
-       struct phy_device *phydev = priv->phy;
-
-       if (!phydev)
-               return -EOPNOTSUPP;
-
-       return phy_ethtool_gset(phydev, cmd);
-}
-
-static int ethoc_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct ethoc *priv = netdev_priv(dev);
-       struct phy_device *phydev = priv->phy;
-
-       if (!phydev)
-               return -EOPNOTSUPP;
-
-       return phy_ethtool_sset(phydev, cmd);
-}
-
 static int ethoc_get_regs_len(struct net_device *netdev)
 {
        return ETH_END;
@@ -983,14 +964,14 @@ static int ethoc_set_ringparam(struct net_device *dev,
 }
 
 const struct ethtool_ops ethoc_ethtool_ops = {
-       .get_settings = ethoc_get_settings,
-       .set_settings = ethoc_set_settings,
        .get_regs_len = ethoc_get_regs_len,
        .get_regs = ethoc_get_regs,
        .get_link = ethtool_op_get_link,
        .get_ringparam = ethoc_get_ringparam,
        .set_ringparam = ethoc_set_ringparam,
        .get_ts_info = ethtool_op_get_ts_info,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 static const struct net_device_ops ethoc_netdev_ops = {
@@ -1086,7 +1067,7 @@ static int ethoc_probe(struct platform_device *pdev)
        if (!priv->iobase) {
                dev_err(&pdev->dev, "cannot remap I/O memory space\n");
                ret = -ENXIO;
-               goto error;
+               goto free;
        }
 
        if (netdev->mem_end) {
@@ -1095,7 +1076,7 @@ static int ethoc_probe(struct platform_device *pdev)
                if (!priv->membase) {
                        dev_err(&pdev->dev, "cannot remap memory space\n");
                        ret = -ENXIO;
-                       goto error;
+                       goto free;
                }
        } else {
                /* Allocate buffer memory */
@@ -1106,7 +1087,7 @@ static int ethoc_probe(struct platform_device *pdev)
                        dev_err(&pdev->dev, "cannot allocate %dB buffer\n",
                                buffer_size);
                        ret = -ENOMEM;
-                       goto error;
+                       goto free;
                }
                netdev->mem_end = netdev->mem_start + buffer_size;
                priv->dma_alloc = buffer_size;
@@ -1120,7 +1101,7 @@ static int ethoc_probe(struct platform_device *pdev)
                128, (netdev->mem_end - netdev->mem_start + 1) / ETHOC_BUFSIZ);
        if (num_bd < 4) {
                ret = -ENODEV;
-               goto error;
+               goto free;
        }
        priv->num_bd = num_bd;
        /* num_tx must be a power of two */
@@ -1133,7 +1114,7 @@ static int ethoc_probe(struct platform_device *pdev)
        priv->vma = devm_kzalloc(&pdev->dev, num_bd*sizeof(void *), GFP_KERNEL);
        if (!priv->vma) {
                ret = -ENOMEM;
-               goto error;
+               goto free;
        }
 
        /* Allow the platform setup code to pass in a MAC address. */
@@ -1261,8 +1242,7 @@ static int ethoc_remove(struct platform_device *pdev)
 
        if (netdev) {
                netif_napi_del(&priv->napi);
-               phy_disconnect(priv->phy);
-               priv->phy = NULL;
+               phy_disconnect(netdev->phydev);
 
                if (priv->mdio) {
                        mdiobus_unregister(priv->mdio);
index 06f0317..f928e6f 100644
 
 #define DRV_NAME                       "nps_mgt_enet"
 
+static inline bool nps_enet_is_tx_pending(struct nps_enet_priv *priv)
+{
+       u32 tx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_TX_CTL);
+       u32 tx_ctrl_ct = (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
+
+       return (!tx_ctrl_ct && priv->tx_skb);
+}
+
 static void nps_enet_clean_rx_fifo(struct net_device *ndev, u32 frame_len)
 {
        struct nps_enet_priv *priv = netdev_priv(ndev);
@@ -46,16 +54,17 @@ static void nps_enet_read_rx_fifo(struct net_device *ndev,
        if (dst_is_aligned) {
                ioread32_rep(priv->regs_base + NPS_ENET_REG_RX_BUF, reg, len);
                reg += len;
-       }
-       else { /* !dst_is_aligned */
+       } else { /* !dst_is_aligned */
                for (i = 0; i < len; i++, reg++) {
                        u32 buf = nps_enet_reg_get(priv, NPS_ENET_REG_RX_BUF);
+
                        put_unaligned_be32(buf, reg);
                }
        }
        /* copy last bytes (if any) */
        if (last) {
                u32 buf;
+
                ioread32_rep(priv->regs_base + NPS_ENET_REG_RX_BUF, &buf, 1);
                memcpy((u8 *)reg, &buf, last);
        }
@@ -140,12 +149,11 @@ static void nps_enet_tx_handler(struct net_device *ndev)
 {
        struct nps_enet_priv *priv = netdev_priv(ndev);
        u32 tx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_TX_CTL);
-       u32 tx_ctrl_ct = (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
        u32 tx_ctrl_et = (tx_ctrl_value & TX_CTL_ET_MASK) >> TX_CTL_ET_SHIFT;
        u32 tx_ctrl_nt = (tx_ctrl_value & TX_CTL_NT_MASK) >> TX_CTL_NT_SHIFT;
 
        /* Check if we got TX */
-       if (!priv->tx_skb || tx_ctrl_ct)
+       if (!nps_enet_is_tx_pending(priv))
                return;
 
        /* Ack Tx ctrl register */
@@ -183,9 +191,6 @@ static int nps_enet_poll(struct napi_struct *napi, int budget)
        work_done = nps_enet_rx_handler(ndev);
        if (work_done < budget) {
                u32 buf_int_enable_value = 0;
-               u32 tx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_TX_CTL);
-               u32 tx_ctrl_ct =
-                       (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
 
                napi_complete(napi);
 
@@ -204,8 +209,7 @@ static int nps_enet_poll(struct napi_struct *napi, int budget)
                 * the two code lines below will solve this situation by
                 * re-adding ourselves to the poll list.
                 */
-
-               if (priv->tx_skb && !tx_ctrl_ct) {
+               if (nps_enet_is_tx_pending(priv)) {
                        nps_enet_reg_set(priv, NPS_ENET_REG_BUF_INT_ENABLE, 0);
                        napi_reschedule(napi);
                }
@@ -230,11 +234,9 @@ static irqreturn_t nps_enet_irq_handler(s32 irq, void *dev_instance)
        struct net_device *ndev = dev_instance;
        struct nps_enet_priv *priv = netdev_priv(ndev);
        u32 rx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_RX_CTL);
-       u32 tx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_TX_CTL);
-       u32 tx_ctrl_ct = (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
        u32 rx_ctrl_cr = (rx_ctrl_value & RX_CTL_CR_MASK) >> RX_CTL_CR_SHIFT;
 
-       if ((!tx_ctrl_ct && priv->tx_skb) || rx_ctrl_cr)
+       if (nps_enet_is_tx_pending(priv) || rx_ctrl_cr)
                if (likely(napi_schedule_prep(&priv->napi))) {
                        nps_enet_reg_set(priv, NPS_ENET_REG_BUF_INT_ENABLE, 0);
                        __napi_schedule(&priv->napi);
@@ -285,6 +287,7 @@ static void nps_enet_hw_reset(struct net_device *ndev)
        ge_rst_value |= NPS_ENET_ENABLE << RST_GMAC_0_SHIFT;
        nps_enet_reg_set(priv, NPS_ENET_REG_GE_RST, ge_rst_value);
        usleep_range(10, 20);
+       ge_rst_value = 0;
        nps_enet_reg_set(priv, NPS_ENET_REG_GE_RST, ge_rst_value);
 
        /* Tx fifo reset sequence */
@@ -459,7 +462,6 @@ static void nps_enet_set_rx_mode(struct net_device *ndev)
                         | NPS_ENET_ENABLE << CFG_2_DISK_DA_SHIFT;
                ge_mac_cfg_2_value = (ge_mac_cfg_2_value & ~CFG_2_DISK_MC_MASK)
                         | NPS_ENET_ENABLE << CFG_2_DISK_MC_SHIFT;
-
        }
 
        nps_enet_reg_set(priv, NPS_ENET_REG_GE_MAC_CFG_2, ge_mac_cfg_2_value);
index e7cf313..36361f8 100644
@@ -31,6 +31,7 @@
 #include <linux/phy.h>
 #include <linux/platform_device.h>
 #include <net/ip.h>
+#include <net/ncsi.h>
 
 #include "ftgmac100.h"
 
@@ -68,10 +69,14 @@ struct ftgmac100 {
 
        struct net_device *netdev;
        struct device *dev;
+       struct ncsi_dev *ndev;
        struct napi_struct napi;
 
        struct mii_bus *mii_bus;
        int old_speed;
+       int int_mask_all;
+       bool use_ncsi;
+       bool enabled;
 };
 
 static int ftgmac100_alloc_rx_page(struct ftgmac100 *priv,
@@ -80,14 +85,6 @@ static int ftgmac100_alloc_rx_page(struct ftgmac100 *priv,
 /******************************************************************************
  * internal functions (hardware register access)
  *****************************************************************************/
-#define INT_MASK_ALL_ENABLED   (FTGMAC100_INT_RPKT_LOST        | \
-                                FTGMAC100_INT_XPKT_ETH         | \
-                                FTGMAC100_INT_XPKT_LOST        | \
-                                FTGMAC100_INT_AHB_ERR          | \
-                                FTGMAC100_INT_PHYSTS_CHG       | \
-                                FTGMAC100_INT_RPKT_BUF         | \
-                                FTGMAC100_INT_NO_RXBUF)
-
 static void ftgmac100_set_rx_ring_base(struct ftgmac100 *priv, dma_addr_t addr)
 {
        iowrite32(addr, priv->base + FTGMAC100_OFFSET_RXR_BADR);
@@ -141,6 +138,55 @@ static void ftgmac100_set_mac(struct ftgmac100 *priv, const unsigned char *mac)
        iowrite32(laddr, priv->base + FTGMAC100_OFFSET_MAC_LADR);
 }
 
+static void ftgmac100_setup_mac(struct ftgmac100 *priv)
+{
+       u8 mac[ETH_ALEN];
+       unsigned int m;
+       unsigned int l;
+       void *addr;
+
+       addr = device_get_mac_address(priv->dev, mac, ETH_ALEN);
+       if (addr) {
+               ether_addr_copy(priv->netdev->dev_addr, mac);
+               dev_info(priv->dev, "Read MAC address %pM from device tree\n",
+                        mac);
+               return;
+       }
+
+       m = ioread32(priv->base + FTGMAC100_OFFSET_MAC_MADR);
+       l = ioread32(priv->base + FTGMAC100_OFFSET_MAC_LADR);
+
+       mac[0] = (m >> 8) & 0xff;
+       mac[1] = m & 0xff;
+       mac[2] = (l >> 24) & 0xff;
+       mac[3] = (l >> 16) & 0xff;
+       mac[4] = (l >> 8) & 0xff;
+       mac[5] = l & 0xff;
+
+       if (is_valid_ether_addr(mac)) {
+               ether_addr_copy(priv->netdev->dev_addr, mac);
+               dev_info(priv->dev, "Read MAC address %pM from chip\n", mac);
+       } else {
+               eth_hw_addr_random(priv->netdev);
+               dev_info(priv->dev, "Generated random MAC address %pM\n",
+                        priv->netdev->dev_addr);
+       }
+}
+
+static int ftgmac100_set_mac_addr(struct net_device *dev, void *p)
+{
+       int ret;
+
+       ret = eth_prepare_mac_addr_change(dev, p);
+       if (ret < 0)
+               return ret;
+
+       eth_commit_mac_addr_change(dev, p);
+       ftgmac100_set_mac(netdev_priv(dev), dev->dev_addr);
+
+       return 0;
+}
+
 static void ftgmac100_init_hw(struct ftgmac100 *priv)
 {
        /* setup ring buffer base registers */
@@ -952,7 +998,10 @@ static irqreturn_t ftgmac100_interrupt(int irq, void *dev_id)
        struct net_device *netdev = dev_id;
        struct ftgmac100 *priv = netdev_priv(netdev);
 
-       if (likely(netif_running(netdev))) {
+       /* When running in NCSI mode, the interface should be ready for
+        * receiving or transmitting NCSI packets before it's opened.
+        */
+       if (likely(priv->use_ncsi || netif_running(netdev))) {
                /* Disable interrupts for polling */
                iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);
                napi_schedule(&priv->napi);
@@ -1005,8 +1054,9 @@ static int ftgmac100_poll(struct napi_struct *napi, int budget)
                ftgmac100_tx_complete(priv);
        }
 
-       if (status & (FTGMAC100_INT_NO_RXBUF | FTGMAC100_INT_RPKT_LOST |
-                     FTGMAC100_INT_AHB_ERR | FTGMAC100_INT_PHYSTS_CHG)) {
+       if (status & priv->int_mask_all & (FTGMAC100_INT_NO_RXBUF |
+                       FTGMAC100_INT_RPKT_LOST | FTGMAC100_INT_AHB_ERR |
+                       FTGMAC100_INT_PHYSTS_CHG)) {
                if (net_ratelimit())
                        netdev_info(netdev, "[ISR] = 0x%x: %s%s%s%s\n", status,
                                    status & FTGMAC100_INT_NO_RXBUF ? "NO_RXBUF " : "",
@@ -1029,7 +1079,8 @@ static int ftgmac100_poll(struct napi_struct *napi, int budget)
                napi_complete(napi);
 
                /* enable all interrupts */
-               iowrite32(INT_MASK_ALL_ENABLED, priv->base + FTGMAC100_OFFSET_IER);
+               iowrite32(priv->int_mask_all,
+                         priv->base + FTGMAC100_OFFSET_IER);
        }
 
        return rx;
@@ -1065,17 +1116,33 @@ static int ftgmac100_open(struct net_device *netdev)
                goto err_hw;
 
        ftgmac100_init_hw(priv);
-       ftgmac100_start_hw(priv, 10);
-
-       phy_start(netdev->phydev);
+       ftgmac100_start_hw(priv, priv->use_ncsi ? 100 : 10);
+       if (netdev->phydev)
+               phy_start(netdev->phydev);
+       else if (priv->use_ncsi)
+               netif_carrier_on(netdev);
 
        napi_enable(&priv->napi);
        netif_start_queue(netdev);
 
        /* enable all interrupts */
-       iowrite32(INT_MASK_ALL_ENABLED, priv->base + FTGMAC100_OFFSET_IER);
+       iowrite32(priv->int_mask_all, priv->base + FTGMAC100_OFFSET_IER);
+
+       /* Start the NCSI device */
+       if (priv->use_ncsi) {
+               err = ncsi_start_dev(priv->ndev);
+               if (err)
+                       goto err_ncsi;
+       }
+
+       priv->enabled = true;
+
        return 0;
 
+err_ncsi:
+       napi_disable(&priv->napi);
+       netif_stop_queue(netdev);
+       iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);
 err_hw:
        free_irq(priv->irq, netdev);
 err_irq:
@@ -1088,12 +1155,17 @@ static int ftgmac100_stop(struct net_device *netdev)
 {
        struct ftgmac100 *priv = netdev_priv(netdev);
 
+       if (!priv->enabled)
+               return 0;
+
        /* disable all interrupts */
+       priv->enabled = false;
        iowrite32(0, priv->base + FTGMAC100_OFFSET_IER);
 
        netif_stop_queue(netdev);
        napi_disable(&priv->napi);
-       phy_stop(netdev->phydev);
+       if (netdev->phydev)
+               phy_stop(netdev->phydev);
 
        ftgmac100_stop_hw(priv);
        free_irq(priv->irq, netdev);
@@ -1134,6 +1206,9 @@ static int ftgmac100_hard_start_xmit(struct sk_buff *skb,
 /* optional */
 static int ftgmac100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
 {
+       if (!netdev->phydev)
+               return -ENXIO;
+
        return phy_mii_ioctl(netdev->phydev, ifr, cmd);
 }
 
@@ -1141,11 +1216,74 @@ static const struct net_device_ops ftgmac100_netdev_ops = {
        .ndo_open               = ftgmac100_open,
        .ndo_stop               = ftgmac100_stop,
        .ndo_start_xmit         = ftgmac100_hard_start_xmit,
-       .ndo_set_mac_address    = eth_mac_addr,
+       .ndo_set_mac_address    = ftgmac100_set_mac_addr,
        .ndo_validate_addr      = eth_validate_addr,
        .ndo_do_ioctl           = ftgmac100_do_ioctl,
 };
 
+static int ftgmac100_setup_mdio(struct net_device *netdev)
+{
+       struct ftgmac100 *priv = netdev_priv(netdev);
+       struct platform_device *pdev = to_platform_device(priv->dev);
+       int i, err = 0;
+
+       /* initialize mdio bus */
+       priv->mii_bus = mdiobus_alloc();
+       if (!priv->mii_bus)
+               return -EIO;
+
+       priv->mii_bus->name = "ftgmac100_mdio";
+       snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%d",
+                pdev->name, pdev->id);
+       priv->mii_bus->priv = priv->netdev;
+       priv->mii_bus->read = ftgmac100_mdiobus_read;
+       priv->mii_bus->write = ftgmac100_mdiobus_write;
+
+       for (i = 0; i < PHY_MAX_ADDR; i++)
+               priv->mii_bus->irq[i] = PHY_POLL;
+
+       err = mdiobus_register(priv->mii_bus);
+       if (err) {
+               dev_err(priv->dev, "Cannot register MDIO bus!\n");
+               goto err_register_mdiobus;
+       }
+
+       err = ftgmac100_mii_probe(priv);
+       if (err) {
+               dev_err(priv->dev, "MII Probe failed!\n");
+               goto err_mii_probe;
+       }
+
+       return 0;
+
+err_mii_probe:
+       mdiobus_unregister(priv->mii_bus);
+err_register_mdiobus:
+       mdiobus_free(priv->mii_bus);
+       return err;
+}
+
+static void ftgmac100_destroy_mdio(struct net_device *netdev)
+{
+       struct ftgmac100 *priv = netdev_priv(netdev);
+
+       if (!netdev->phydev)
+               return;
+
+       phy_disconnect(netdev->phydev);
+       mdiobus_unregister(priv->mii_bus);
+       mdiobus_free(priv->mii_bus);
+}
+
+static void ftgmac100_ncsi_handler(struct ncsi_dev *nd)
+{
+       if (unlikely(nd->state != ncsi_dev_state_functional))
+               return;
+
+       netdev_info(nd->dev, "NCSI interface %s\n",
+                   nd->link_up ? "up" : "down");
+}
+
 /******************************************************************************
  * struct platform_driver functions
  *****************************************************************************/
@@ -1155,7 +1293,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
        int irq;
        struct net_device *netdev;
        struct ftgmac100 *priv;
-       int err;
+       int err = 0;
 
        if (!pdev)
                return -ENODEV;
@@ -1179,7 +1317,6 @@ static int ftgmac100_probe(struct platform_device *pdev)
 
        netdev->ethtool_ops = &ftgmac100_ethtool_ops;
        netdev->netdev_ops = &ftgmac100_netdev_ops;
-       netdev->features = NETIF_F_IP_CSUM | NETIF_F_GRO;
 
        platform_set_drvdata(pdev, netdev);
 
@@ -1211,31 +1348,45 @@ static int ftgmac100_probe(struct platform_device *pdev)
 
        priv->irq = irq;
 
-       /* initialize mdio bus */
-       priv->mii_bus = mdiobus_alloc();
-       if (!priv->mii_bus) {
-               err = -EIO;
-               goto err_alloc_mdiobus;
-       }
-
-       priv->mii_bus->name = "ftgmac100_mdio";
-       snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "ftgmac100_mii");
-
-       priv->mii_bus->priv = netdev;
-       priv->mii_bus->read = ftgmac100_mdiobus_read;
-       priv->mii_bus->write = ftgmac100_mdiobus_write;
+       /* MAC address from chip or random one */
+       ftgmac100_setup_mac(priv);
+
+       priv->int_mask_all = (FTGMAC100_INT_RPKT_LOST |
+                             FTGMAC100_INT_XPKT_ETH |
+                             FTGMAC100_INT_XPKT_LOST |
+                             FTGMAC100_INT_AHB_ERR |
+                             FTGMAC100_INT_PHYSTS_CHG |
+                             FTGMAC100_INT_RPKT_BUF |
+                             FTGMAC100_INT_NO_RXBUF);
+       if (pdev->dev.of_node &&
+           of_get_property(pdev->dev.of_node, "use-ncsi", NULL)) {
+               if (!IS_ENABLED(CONFIG_NET_NCSI)) {
+                       dev_err(&pdev->dev, "NCSI stack not enabled\n");
+                       goto err_ncsi_dev;
+               }
 
-       err = mdiobus_register(priv->mii_bus);
-       if (err) {
-               dev_err(&pdev->dev, "Cannot register MDIO bus!\n");
-               goto err_register_mdiobus;
+               dev_info(&pdev->dev, "Using NCSI interface\n");
+               priv->use_ncsi = true;
+               priv->int_mask_all &= ~FTGMAC100_INT_PHYSTS_CHG;
+               priv->ndev = ncsi_register_dev(netdev, ftgmac100_ncsi_handler);
+               if (!priv->ndev)
+                       goto err_ncsi_dev;
+       } else {
+               priv->use_ncsi = false;
+               err = ftgmac100_setup_mdio(netdev);
+               if (err)
+                       goto err_setup_mdio;
        }
 
-       err = ftgmac100_mii_probe(priv);
-       if (err) {
-               dev_err(&pdev->dev, "MII Probe failed!\n");
-               goto err_mii_probe;
-       }
+       /* We have to disable on-chip IP checksum functionality
+        * when NCSI is enabled on the interface. It doesn't work
+        * in that case.
+        */
+       netdev->features = NETIF_F_IP_CSUM | NETIF_F_GRO;
+       if (priv->use_ncsi &&
+           of_get_property(pdev->dev.of_node, "no-hw-checksum", NULL))
+               netdev->features &= ~NETIF_F_IP_CSUM;
+
 
        /* register network device */
        err = register_netdev(netdev);
@@ -1246,21 +1397,12 @@ static int ftgmac100_probe(struct platform_device *pdev)
 
        netdev_info(netdev, "irq %d, mapped at %p\n", priv->irq, priv->base);
 
-       if (!is_valid_ether_addr(netdev->dev_addr)) {
-               eth_hw_addr_random(netdev);
-               netdev_info(netdev, "generated random MAC address %pM\n",
-                           netdev->dev_addr);
-       }
-
        return 0;
 
+err_ncsi_dev:
 err_register_netdev:
-       phy_disconnect(netdev->phydev);
-err_mii_probe:
-       mdiobus_unregister(priv->mii_bus);
-err_register_mdiobus:
-       mdiobus_free(priv->mii_bus);
-err_alloc_mdiobus:
+       ftgmac100_destroy_mdio(netdev);
+err_setup_mdio:
        iounmap(priv->base);
 err_ioremap:
        release_resource(priv->res);
@@ -1280,10 +1422,7 @@ static int __exit ftgmac100_remove(struct platform_device *pdev)
        priv = netdev_priv(netdev);
 
        unregister_netdev(netdev);
-
-       phy_disconnect(netdev->phydev);
-       mdiobus_unregister(priv->mii_bus);
-       mdiobus_free(priv->mii_bus);
+       ftgmac100_destroy_mdio(netdev);
 
        iounmap(priv->base);
        release_resource(priv->res);
@@ -1293,14 +1432,20 @@ static int __exit ftgmac100_remove(struct platform_device *pdev)
        return 0;
 }
 
+static const struct of_device_id ftgmac100_of_match[] = {
+       { .compatible = "faraday,ftgmac100" },
+       { }
+};
+MODULE_DEVICE_TABLE(of, ftgmac100_of_match);
+
 static struct platform_driver ftgmac100_driver = {
-       .probe          = ftgmac100_probe,
-       .remove         = __exit_p(ftgmac100_remove),
-       .driver         = {
-               .name   = DRV_NAME,
+       .probe  = ftgmac100_probe,
+       .remove = __exit_p(ftgmac100_remove),
+       .driver = {
+               .name           = DRV_NAME,
+               .of_match_table = ftgmac100_of_match,
        },
 };
-
 module_platform_driver(ftgmac100_driver);
 
 MODULE_AUTHOR("Po-Yu Chuang <ratbert@faraday-tech.com>");
index 2e25662..d11287e 100644 (file)
@@ -23,6 +23,18 @@ config HIX5HD2_GMAC
        help
          This selects the hix5hd2 mac family network device.
 
+config HISI_FEMAC
+       tristate "Hisilicon Fast Ethernet MAC device support"
+       depends on HAS_IOMEM
+       select PHYLIB
+       select RESET_CONTROLLER
+       help
+         This selects the Hisilicon Fast Ethernet MAC device (FEMAC).
+         The FEMAC receives and transmits data over Ethernet
+         ports at 10/100 Mbps in full-duplex or half-duplex mode.
+         The FEMAC exchanges data with the CPU and supports
+         energy-efficient Ethernet (EEE).
+
 config HIP04_ETH
        tristate "HISILICON P04 Ethernet support"
        depends on HAS_IOMEM    # For MFD_SYSCON
index 390b71f..8661695 100644 (file)
@@ -6,3 +6,4 @@ obj-$(CONFIG_HIX5HD2_GMAC) += hix5hd2_gmac.o
 obj-$(CONFIG_HIP04_ETH) += hip04_eth.o
 obj-$(CONFIG_HNS_MDIO) += hns_mdio.o
 obj-$(CONFIG_HNS) += hns/
+obj-$(CONFIG_HISI_FEMAC) += hisi_femac.o
diff --git a/drivers/net/ethernet/hisilicon/hisi_femac.c b/drivers/net/ethernet/hisilicon/hisi_femac.c
new file mode 100644 (file)
index 0000000..b5d7ad0
--- /dev/null
@@ -0,0 +1,1007 @@
+/*
+ * Hisilicon Fast Ethernet MAC Driver
+ *
+ * Copyright (c) 2016 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/circ_buf.h>
+#include <linux/clk.h>
+#include <linux/etherdevice.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/platform_device.h>
+#include <linux/reset.h>
+
+/* MAC control register list */
+#define MAC_PORTSEL                    0x0200
+#define MAC_PORTSEL_STAT_CPU           BIT(0)
+#define MAC_PORTSEL_RMII               BIT(1)
+#define MAC_PORTSET                    0x0208
+#define MAC_PORTSET_DUPLEX_FULL                BIT(0)
+#define MAC_PORTSET_LINKED             BIT(1)
+#define MAC_PORTSET_SPEED_100M         BIT(2)
+#define MAC_SET                                0x0210
+#define MAX_FRAME_SIZE                 1600
+#define MAX_FRAME_SIZE_MASK            GENMASK(10, 0)
+#define BIT_PAUSE_EN                   BIT(18)
+#define RX_COALESCE_SET                        0x0340
+#define RX_COALESCED_FRAME_OFFSET      24
+#define RX_COALESCED_FRAMES            8
+#define RX_COALESCED_TIMER             0x74
+#define QLEN_SET                       0x0344
+#define RX_DEPTH_OFFSET                        8
+#define MAX_HW_FIFO_DEPTH              64
+#define HW_TX_FIFO_DEPTH               12
+#define HW_RX_FIFO_DEPTH               (MAX_HW_FIFO_DEPTH - HW_TX_FIFO_DEPTH)
+#define IQFRM_DES                      0x0354
+#define RX_FRAME_LEN_MASK              GENMASK(11, 0)
+#define IQ_ADDR                                0x0358
+#define EQ_ADDR                                0x0360
+#define EQFRM_LEN                      0x0364
+#define ADDRQ_STAT                     0x036C
+#define TX_CNT_INUSE_MASK              GENMASK(5, 0)
+#define BIT_TX_READY                   BIT(24)
+#define BIT_RX_READY                   BIT(25)
+/* global control register list */
+#define GLB_HOSTMAC_L32                        0x0000
+#define GLB_HOSTMAC_H16                        0x0004
+#define GLB_SOFT_RESET                 0x0008
+#define SOFT_RESET_ALL                 BIT(0)
+#define GLB_FWCTRL                     0x0010
+#define FWCTRL_VLAN_ENABLE             BIT(0)
+#define FWCTRL_FW2CPU_ENA              BIT(5)
+#define FWCTRL_FWALL2CPU               BIT(7)
+#define GLB_MACTCTRL                   0x0014
+#define MACTCTRL_UNI2CPU               BIT(1)
+#define MACTCTRL_MULTI2CPU             BIT(3)
+#define MACTCTRL_BROAD2CPU             BIT(5)
+#define MACTCTRL_MACT_ENA              BIT(7)
+#define GLB_IRQ_STAT                   0x0030
+#define GLB_IRQ_ENA                    0x0034
+#define IRQ_ENA_PORT0_MASK             GENMASK(7, 0)
+#define IRQ_ENA_PORT0                  BIT(18)
+#define IRQ_ENA_ALL                    BIT(19)
+#define GLB_IRQ_RAW                    0x0038
+#define IRQ_INT_RX_RDY                 BIT(0)
+#define IRQ_INT_TX_PER_PACKET          BIT(1)
+#define IRQ_INT_TX_FIFO_EMPTY          BIT(6)
+#define IRQ_INT_MULTI_RXRDY            BIT(7)
+#define DEF_INT_MASK                   (IRQ_INT_MULTI_RXRDY | \
+                                       IRQ_INT_TX_PER_PACKET | \
+                                       IRQ_INT_TX_FIFO_EMPTY)
+#define GLB_MAC_L32_BASE               0x0100
+#define GLB_MAC_H16_BASE               0x0104
+#define MACFLT_HI16_MASK               GENMASK(15, 0)
+#define BIT_MACFLT_ENA                 BIT(17)
+#define BIT_MACFLT_FW2CPU              BIT(21)
+#define GLB_MAC_H16(reg)               (GLB_MAC_H16_BASE + ((reg) * 0x8))
+#define GLB_MAC_L32(reg)               (GLB_MAC_L32_BASE + ((reg) * 0x8))
+#define MAX_MAC_FILTER_NUM             8
+#define MAX_UNICAST_ADDRESSES          2
+#define MAX_MULTICAST_ADDRESSES                (MAX_MAC_FILTER_NUM - \
+                                       MAX_UNICAST_ADDRESSES)
+/* software tx and rx queue number, should be power of 2 */
+#define TXQ_NUM                                64
+#define RXQ_NUM                                128
+#define FEMAC_POLL_WEIGHT              16
+
+#define PHY_RESET_DELAYS_PROPERTY      "hisilicon,phy-reset-delays-us"
+
+enum phy_reset_delays {
+       PRE_DELAY,
+       PULSE,
+       POST_DELAY,
+       DELAYS_NUM,
+};
+
+struct hisi_femac_queue {
+       struct sk_buff **skb;
+       dma_addr_t *dma_phys;
+       int num;
+       unsigned int head;
+       unsigned int tail;
+};
+
+struct hisi_femac_priv {
+       void __iomem *port_base;
+       void __iomem *glb_base;
+       struct clk *clk;
+       struct reset_control *mac_rst;
+       struct reset_control *phy_rst;
+       u32 phy_reset_delays[DELAYS_NUM];
+       u32 link_status;
+
+       struct device *dev;
+       struct net_device *ndev;
+
+       struct hisi_femac_queue txq;
+       struct hisi_femac_queue rxq;
+       u32 tx_fifo_used_cnt;
+       struct napi_struct napi;
+};
+
+static void hisi_femac_irq_enable(struct hisi_femac_priv *priv, int irqs)
+{
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_IRQ_ENA);
+       writel(val | irqs, priv->glb_base + GLB_IRQ_ENA);
+}
+
+static void hisi_femac_irq_disable(struct hisi_femac_priv *priv, int irqs)
+{
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_IRQ_ENA);
+       writel(val & (~irqs), priv->glb_base + GLB_IRQ_ENA);
+}
+
+static void hisi_femac_tx_dma_unmap(struct hisi_femac_priv *priv,
+                                   struct sk_buff *skb, unsigned int pos)
+{
+       dma_addr_t dma_addr;
+
+       dma_addr = priv->txq.dma_phys[pos];
+       dma_unmap_single(priv->dev, dma_addr, skb->len, DMA_TO_DEVICE);
+}
+
+static void hisi_femac_xmit_reclaim(struct net_device *dev)
+{
+       struct sk_buff *skb;
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+       struct hisi_femac_queue *txq = &priv->txq;
+       unsigned int bytes_compl = 0, pkts_compl = 0;
+       u32 val;
+
+       netif_tx_lock(dev);
+
+       val = readl(priv->port_base + ADDRQ_STAT) & TX_CNT_INUSE_MASK;
+       while (val < priv->tx_fifo_used_cnt) {
+               skb = txq->skb[txq->tail];
+               if (unlikely(!skb)) {
+                       netdev_err(dev, "xmitq_cnt_inuse=%d, tx_fifo_used=%d\n",
+                                  val, priv->tx_fifo_used_cnt);
+                       break;
+               }
+               hisi_femac_tx_dma_unmap(priv, skb, txq->tail);
+               pkts_compl++;
+               bytes_compl += skb->len;
+               dev_kfree_skb_any(skb);
+
+               priv->tx_fifo_used_cnt--;
+
+               val = readl(priv->port_base + ADDRQ_STAT) & TX_CNT_INUSE_MASK;
+               txq->skb[txq->tail] = NULL;
+               txq->tail = (txq->tail + 1) % txq->num;
+       }
+
+       netdev_completed_queue(dev, pkts_compl, bytes_compl);
+
+       if (unlikely(netif_queue_stopped(dev)) && pkts_compl)
+               netif_wake_queue(dev);
+
+       netif_tx_unlock(dev);
+}
+
+static void hisi_femac_adjust_link(struct net_device *dev)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+       struct phy_device *phy = dev->phydev;
+       u32 status = 0;
+
+       if (phy->link)
+               status |= MAC_PORTSET_LINKED;
+       if (phy->duplex == DUPLEX_FULL)
+               status |= MAC_PORTSET_DUPLEX_FULL;
+       if (phy->speed == SPEED_100)
+               status |= MAC_PORTSET_SPEED_100M;
+
+       if ((status != priv->link_status) &&
+           ((status | priv->link_status) & MAC_PORTSET_LINKED)) {
+               writel(status, priv->port_base + MAC_PORTSET);
+               priv->link_status = status;
+               phy_print_status(phy);
+       }
+}
+
+static void hisi_femac_rx_refill(struct hisi_femac_priv *priv)
+{
+       struct hisi_femac_queue *rxq = &priv->rxq;
+       struct sk_buff *skb;
+       u32 pos;
+       u32 len = MAX_FRAME_SIZE;
+       dma_addr_t addr;
+
+       pos = rxq->head;
+       while (readl(priv->port_base + ADDRQ_STAT) & BIT_RX_READY) {
+               if (!CIRC_SPACE(pos, rxq->tail, rxq->num))
+                       break;
+               if (unlikely(rxq->skb[pos])) {
+                       netdev_err(priv->ndev, "err skb[%d]=%p\n",
+                                  pos, rxq->skb[pos]);
+                       break;
+               }
+               skb = netdev_alloc_skb_ip_align(priv->ndev, len);
+               if (unlikely(!skb))
+                       break;
+
+               addr = dma_map_single(priv->dev, skb->data, len,
+                                     DMA_FROM_DEVICE);
+               if (dma_mapping_error(priv->dev, addr)) {
+                       dev_kfree_skb_any(skb);
+                       break;
+               }
+               rxq->dma_phys[pos] = addr;
+               rxq->skb[pos] = skb;
+               writel(addr, priv->port_base + IQ_ADDR);
+               pos = (pos + 1) % rxq->num;
+       }
+       rxq->head = pos;
+}
+
+static int hisi_femac_rx(struct net_device *dev, int limit)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+       struct hisi_femac_queue *rxq = &priv->rxq;
+       struct sk_buff *skb;
+       dma_addr_t addr;
+       u32 rx_pkt_info, pos, len, rx_pkts_num = 0;
+
+       pos = rxq->tail;
+       while (readl(priv->glb_base + GLB_IRQ_RAW) & IRQ_INT_RX_RDY) {
+               rx_pkt_info = readl(priv->port_base + IQFRM_DES);
+               len = rx_pkt_info & RX_FRAME_LEN_MASK;
+               len -= ETH_FCS_LEN;
+
+               /* tell hardware we will deal with this packet */
+               writel(IRQ_INT_RX_RDY, priv->glb_base + GLB_IRQ_RAW);
+
+               rx_pkts_num++;
+
+               skb = rxq->skb[pos];
+               if (unlikely(!skb)) {
+                       netdev_err(dev, "rx skb NULL. pos=%d\n", pos);
+                       break;
+               }
+               rxq->skb[pos] = NULL;
+
+               addr = rxq->dma_phys[pos];
+               dma_unmap_single(priv->dev, addr, MAX_FRAME_SIZE,
+                                DMA_FROM_DEVICE);
+               skb_put(skb, len);
+               if (unlikely(skb->len > MAX_FRAME_SIZE)) {
+                       netdev_err(dev, "rcv len err, len = %d\n", skb->len);
+                       dev->stats.rx_errors++;
+                       dev->stats.rx_length_errors++;
+                       dev_kfree_skb_any(skb);
+                       goto next;
+               }
+
+               skb->protocol = eth_type_trans(skb, dev);
+               napi_gro_receive(&priv->napi, skb);
+               dev->stats.rx_packets++;
+               dev->stats.rx_bytes += skb->len;
+next:
+               pos = (pos + 1) % rxq->num;
+               if (rx_pkts_num >= limit)
+                       break;
+       }
+       rxq->tail = pos;
+
+       hisi_femac_rx_refill(priv);
+
+       return rx_pkts_num;
+}
+
+static int hisi_femac_poll(struct napi_struct *napi, int budget)
+{
+       struct hisi_femac_priv *priv = container_of(napi,
+                                       struct hisi_femac_priv, napi);
+       struct net_device *dev = priv->ndev;
+       int work_done = 0, task = budget;
+       int ints, num;
+
+       do {
+               hisi_femac_xmit_reclaim(dev);
+               num = hisi_femac_rx(dev, task);
+               work_done += num;
+               task -= num;
+               if (work_done >= budget)
+                       break;
+
+               ints = readl(priv->glb_base + GLB_IRQ_RAW);
+               writel(ints & DEF_INT_MASK,
+                      priv->glb_base + GLB_IRQ_RAW);
+       } while (ints & DEF_INT_MASK);
+
+       if (work_done < budget) {
+               napi_complete(napi);
+               hisi_femac_irq_enable(priv, DEF_INT_MASK &
+                                       (~IRQ_INT_TX_PER_PACKET));
+       }
+
+       return work_done;
+}
+
+static irqreturn_t hisi_femac_interrupt(int irq, void *dev_id)
+{
+       int ints;
+       struct net_device *dev = (struct net_device *)dev_id;
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+
+       ints = readl(priv->glb_base + GLB_IRQ_RAW);
+
+       if (likely(ints & DEF_INT_MASK)) {
+               writel(ints & DEF_INT_MASK,
+                      priv->glb_base + GLB_IRQ_RAW);
+               hisi_femac_irq_disable(priv, DEF_INT_MASK);
+               napi_schedule(&priv->napi);
+       }
+
+       return IRQ_HANDLED;
+}
+
+static int hisi_femac_init_queue(struct device *dev,
+                                struct hisi_femac_queue *queue,
+                                unsigned int num)
+{
+       queue->skb = devm_kcalloc(dev, num, sizeof(struct sk_buff *),
+                                 GFP_KERNEL);
+       if (!queue->skb)
+               return -ENOMEM;
+
+       queue->dma_phys = devm_kcalloc(dev, num, sizeof(dma_addr_t),
+                                      GFP_KERNEL);
+       if (!queue->dma_phys)
+               return -ENOMEM;
+
+       queue->num = num;
+       queue->head = 0;
+       queue->tail = 0;
+
+       return 0;
+}
+
+static int hisi_femac_init_tx_and_rx_queues(struct hisi_femac_priv *priv)
+{
+       int ret;
+
+       ret = hisi_femac_init_queue(priv->dev, &priv->txq, TXQ_NUM);
+       if (ret)
+               return ret;
+
+       ret = hisi_femac_init_queue(priv->dev, &priv->rxq, RXQ_NUM);
+       if (ret)
+               return ret;
+
+       priv->tx_fifo_used_cnt = 0;
+
+       return 0;
+}
+
+static void hisi_femac_free_skb_rings(struct hisi_femac_priv *priv)
+{
+       struct hisi_femac_queue *txq = &priv->txq;
+       struct hisi_femac_queue *rxq = &priv->rxq;
+       struct sk_buff *skb;
+       dma_addr_t dma_addr;
+       u32 pos;
+
+       pos = rxq->tail;
+       while (pos != rxq->head) {
+               skb = rxq->skb[pos];
+               if (unlikely(!skb)) {
+                       netdev_err(priv->ndev, "NULL rx skb. pos=%d, head=%d\n",
+                                  pos, rxq->head);
+                       continue;
+               }
+
+               dma_addr = rxq->dma_phys[pos];
+               dma_unmap_single(priv->dev, dma_addr, MAX_FRAME_SIZE,
+                                DMA_FROM_DEVICE);
+
+               dev_kfree_skb_any(skb);
+               rxq->skb[pos] = NULL;
+               pos = (pos + 1) % rxq->num;
+       }
+       rxq->tail = pos;
+
+       pos = txq->tail;
+       while (pos != txq->head) {
+               skb = txq->skb[pos];
+               if (unlikely(!skb)) {
+                       netdev_err(priv->ndev, "NULL tx skb. pos=%d, head=%d\n",
+                                  pos, txq->head);
+                       continue;
+               }
+               hisi_femac_tx_dma_unmap(priv, skb, pos);
+               dev_kfree_skb_any(skb);
+               txq->skb[pos] = NULL;
+               pos = (pos + 1) % txq->num;
+       }
+       txq->tail = pos;
+       priv->tx_fifo_used_cnt = 0;
+}
+
+static int hisi_femac_set_hw_mac_addr(struct hisi_femac_priv *priv,
+                                     unsigned char *mac)
+{
+       u32 reg;
+
+       reg = mac[1] | (mac[0] << 8);
+       writel(reg, priv->glb_base + GLB_HOSTMAC_H16);
+
+       reg = mac[5] | (mac[4] << 8) | (mac[3] << 16) | (mac[2] << 24);
+       writel(reg, priv->glb_base + GLB_HOSTMAC_L32);
+
+       return 0;
+}
+
+static int hisi_femac_port_reset(struct hisi_femac_priv *priv)
+{
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_SOFT_RESET);
+       val |= SOFT_RESET_ALL;
+       writel(val, priv->glb_base + GLB_SOFT_RESET);
+
+       usleep_range(500, 800);
+
+       val &= ~SOFT_RESET_ALL;
+       writel(val, priv->glb_base + GLB_SOFT_RESET);
+
+       return 0;
+}
+
+static int hisi_femac_net_open(struct net_device *dev)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+
+       hisi_femac_port_reset(priv);
+       hisi_femac_set_hw_mac_addr(priv, dev->dev_addr);
+       hisi_femac_rx_refill(priv);
+
+       netif_carrier_off(dev);
+       netdev_reset_queue(dev);
+       netif_start_queue(dev);
+       napi_enable(&priv->napi);
+
+       priv->link_status = 0;
+       if (dev->phydev)
+               phy_start(dev->phydev);
+
+       writel(IRQ_ENA_PORT0_MASK, priv->glb_base + GLB_IRQ_RAW);
+       hisi_femac_irq_enable(priv, IRQ_ENA_ALL | IRQ_ENA_PORT0 | DEF_INT_MASK);
+
+       return 0;
+}
+
+static int hisi_femac_net_close(struct net_device *dev)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+
+       hisi_femac_irq_disable(priv, IRQ_ENA_PORT0);
+
+       if (dev->phydev)
+               phy_stop(dev->phydev);
+
+       netif_stop_queue(dev);
+       napi_disable(&priv->napi);
+
+       hisi_femac_free_skb_rings(priv);
+
+       return 0;
+}
+
+static netdev_tx_t hisi_femac_net_xmit(struct sk_buff *skb,
+                                      struct net_device *dev)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+       struct hisi_femac_queue *txq = &priv->txq;
+       dma_addr_t addr;
+       u32 val;
+
+       val = readl(priv->port_base + ADDRQ_STAT);
+       val &= BIT_TX_READY;
+       if (!val) {
+               hisi_femac_irq_enable(priv, IRQ_INT_TX_PER_PACKET);
+               dev->stats.tx_dropped++;
+               dev->stats.tx_fifo_errors++;
+               netif_stop_queue(dev);
+               return NETDEV_TX_BUSY;
+       }
+
+       if (unlikely(!CIRC_SPACE(txq->head, txq->tail,
+                                txq->num))) {
+               hisi_femac_irq_enable(priv, IRQ_INT_TX_PER_PACKET);
+               dev->stats.tx_dropped++;
+               dev->stats.tx_fifo_errors++;
+               netif_stop_queue(dev);
+               return NETDEV_TX_BUSY;
+       }
+
+       addr = dma_map_single(priv->dev, skb->data,
+                             skb->len, DMA_TO_DEVICE);
+       if (unlikely(dma_mapping_error(priv->dev, addr))) {
+               dev_kfree_skb_any(skb);
+               dev->stats.tx_dropped++;
+               return NETDEV_TX_OK;
+       }
+       txq->dma_phys[txq->head] = addr;
+
+       txq->skb[txq->head] = skb;
+       txq->head = (txq->head + 1) % txq->num;
+
+       writel(addr, priv->port_base + EQ_ADDR);
+       writel(skb->len + ETH_FCS_LEN, priv->port_base + EQFRM_LEN);
+
+       priv->tx_fifo_used_cnt++;
+
+       dev->stats.tx_packets++;
+       dev->stats.tx_bytes += skb->len;
+       netdev_sent_queue(dev, skb->len);
+
+       return NETDEV_TX_OK;
+}
+
+static int hisi_femac_set_mac_address(struct net_device *dev, void *p)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+       struct sockaddr *skaddr = p;
+
+       if (!is_valid_ether_addr(skaddr->sa_data))
+               return -EADDRNOTAVAIL;
+
+       memcpy(dev->dev_addr, skaddr->sa_data, dev->addr_len);
+       dev->addr_assign_type &= ~NET_ADDR_RANDOM;
+
+       hisi_femac_set_hw_mac_addr(priv, dev->dev_addr);
+
+       return 0;
+}
+
+static void hisi_femac_enable_hw_addr_filter(struct hisi_femac_priv *priv,
+                                            unsigned int reg_n, bool enable)
+{
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_MAC_H16(reg_n));
+       if (enable)
+               val |= BIT_MACFLT_ENA;
+       else
+               val &= ~BIT_MACFLT_ENA;
+       writel(val, priv->glb_base + GLB_MAC_H16(reg_n));
+}
+
+static void hisi_femac_set_hw_addr_filter(struct hisi_femac_priv *priv,
+                                         unsigned char *addr,
+                                         unsigned int reg_n)
+{
+       unsigned int high, low;
+       u32 val;
+
+       high = GLB_MAC_H16(reg_n);
+       low = GLB_MAC_L32(reg_n);
+
+       val = (addr[2] << 24) | (addr[3] << 16) | (addr[4] << 8) | addr[5];
+       writel(val, priv->glb_base + low);
+
+       val = readl(priv->glb_base + high);
+       val &= ~MACFLT_HI16_MASK;
+       val |= ((addr[0] << 8) | addr[1]);
+       val |= (BIT_MACFLT_ENA | BIT_MACFLT_FW2CPU);
+       writel(val, priv->glb_base + high);
+}
+
+static void hisi_femac_set_promisc_mode(struct hisi_femac_priv *priv,
+                                       bool promisc_mode)
+{
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_FWCTRL);
+       if (promisc_mode)
+               val |= FWCTRL_FWALL2CPU;
+       else
+               val &= ~FWCTRL_FWALL2CPU;
+       writel(val, priv->glb_base + GLB_FWCTRL);
+}
+
+/* Handle multiple multicast addresses (perfect filtering) */
+static void hisi_femac_set_mc_addr_filter(struct hisi_femac_priv *priv)
+{
+       struct net_device *dev = priv->ndev;
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_MACTCTRL);
+       if ((netdev_mc_count(dev) > MAX_MULTICAST_ADDRESSES) ||
+           (dev->flags & IFF_ALLMULTI)) {
+               val |= MACTCTRL_MULTI2CPU;
+       } else {
+               int reg = MAX_UNICAST_ADDRESSES;
+               int i;
+               struct netdev_hw_addr *ha;
+
+               for (i = reg; i < MAX_MAC_FILTER_NUM; i++)
+                       hisi_femac_enable_hw_addr_filter(priv, i, false);
+
+               netdev_for_each_mc_addr(ha, dev) {
+                       hisi_femac_set_hw_addr_filter(priv, ha->addr, reg);
+                       reg++;
+               }
+               val &= ~MACTCTRL_MULTI2CPU;
+       }
+       writel(val, priv->glb_base + GLB_MACTCTRL);
+}
+
+/* Handle multiple unicast addresses (perfect filtering) */
+static void hisi_femac_set_uc_addr_filter(struct hisi_femac_priv *priv)
+{
+       struct net_device *dev = priv->ndev;
+       u32 val;
+
+       val = readl(priv->glb_base + GLB_MACTCTRL);
+       if (netdev_uc_count(dev) > MAX_UNICAST_ADDRESSES) {
+               val |= MACTCTRL_UNI2CPU;
+       } else {
+               int reg = 0;
+               int i;
+               struct netdev_hw_addr *ha;
+
+               for (i = reg; i < MAX_UNICAST_ADDRESSES; i++)
+                       hisi_femac_enable_hw_addr_filter(priv, i, false);
+
+               netdev_for_each_uc_addr(ha, dev) {
+                       hisi_femac_set_hw_addr_filter(priv, ha->addr, reg);
+                       reg++;
+               }
+               val &= ~MACTCTRL_UNI2CPU;
+       }
+       writel(val, priv->glb_base + GLB_MACTCTRL);
+}
+
+static void hisi_femac_net_set_rx_mode(struct net_device *dev)
+{
+       struct hisi_femac_priv *priv = netdev_priv(dev);
+
+       if (dev->flags & IFF_PROMISC) {
+               hisi_femac_set_promisc_mode(priv, true);
+       } else {
+               hisi_femac_set_promisc_mode(priv, false);
+               hisi_femac_set_mc_addr_filter(priv);
+               hisi_femac_set_uc_addr_filter(priv);
+       }
+}
+
+static int hisi_femac_net_ioctl(struct net_device *dev,
+                               struct ifreq *ifreq, int cmd)
+{
+       if (!netif_running(dev))
+               return -EINVAL;
+
+       if (!dev->phydev)
+               return -EINVAL;
+
+       return phy_mii_ioctl(dev->phydev, ifreq, cmd);
+}
+
+static struct ethtool_ops hisi_femac_ethtools_ops = {
+       .get_link               = ethtool_op_get_link,
+       .get_link_ksettings     = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings     = phy_ethtool_set_link_ksettings,
+};
+
+static const struct net_device_ops hisi_femac_netdev_ops = {
+       .ndo_open               = hisi_femac_net_open,
+       .ndo_stop               = hisi_femac_net_close,
+       .ndo_start_xmit         = hisi_femac_net_xmit,
+       .ndo_do_ioctl           = hisi_femac_net_ioctl,
+       .ndo_set_mac_address    = hisi_femac_set_mac_address,
+       .ndo_set_rx_mode        = hisi_femac_net_set_rx_mode,
+       .ndo_change_mtu         = eth_change_mtu,
+};
+
+static void hisi_femac_core_reset(struct hisi_femac_priv *priv)
+{
+       reset_control_assert(priv->mac_rst);
+       reset_control_deassert(priv->mac_rst);
+}
+
+static void hisi_femac_sleep_us(u32 time_us)
+{
+       u32 time_ms;
+
+       if (!time_us)
+               return;
+
+       time_ms = DIV_ROUND_UP(time_us, 1000);
+       if (time_ms < 20)
+               usleep_range(time_us, time_us + 500);
+       else
+               msleep(time_ms);
+}
+
+static void hisi_femac_phy_reset(struct hisi_femac_priv *priv)
+{
+       /* To make sure the PHY hardware reset succeeds, first keep
+        * the PHY in the deasserted state, then perform the full
+        * assert/deassert reset sequence
+        */
+       reset_control_deassert(priv->phy_rst);
+       hisi_femac_sleep_us(priv->phy_reset_delays[PRE_DELAY]);
+
+       reset_control_assert(priv->phy_rst);
+       /* hold the reset asserted long enough to take effect;
+        * the required pulse width depends on the PHY hardware
+        */
+       hisi_femac_sleep_us(priv->phy_reset_delays[PULSE]);
+       reset_control_deassert(priv->phy_rst);
+       /* wait before the first MDIO access after reset */
+       hisi_femac_sleep_us(priv->phy_reset_delays[POST_DELAY]);
+}
+
+static void hisi_femac_port_init(struct hisi_femac_priv *priv)
+{
+       u32 val;
+
+       /* MAC gets link status info and phy mode by software config */
+       val = MAC_PORTSEL_STAT_CPU;
+       if (priv->ndev->phydev->interface == PHY_INTERFACE_MODE_RMII)
+               val |= MAC_PORTSEL_RMII;
+       writel(val, priv->port_base + MAC_PORTSEL);
+
+       /* clear all interrupt status */
+       writel(IRQ_ENA_PORT0_MASK, priv->glb_base + GLB_IRQ_RAW);
+       hisi_femac_irq_disable(priv, IRQ_ENA_PORT0_MASK | IRQ_ENA_PORT0);
+
+       val = readl(priv->glb_base + GLB_FWCTRL);
+       val &= ~(FWCTRL_VLAN_ENABLE | FWCTRL_FWALL2CPU);
+       val |= FWCTRL_FW2CPU_ENA;
+       writel(val, priv->glb_base + GLB_FWCTRL);
+
+       val = readl(priv->glb_base + GLB_MACTCTRL);
+       val |= (MACTCTRL_BROAD2CPU | MACTCTRL_MACT_ENA);
+       writel(val, priv->glb_base + GLB_MACTCTRL);
+
+       val = readl(priv->port_base + MAC_SET);
+       val &= ~MAX_FRAME_SIZE_MASK;
+       val |= MAX_FRAME_SIZE;
+       writel(val, priv->port_base + MAC_SET);
+
+       val = RX_COALESCED_TIMER |
+               (RX_COALESCED_FRAMES << RX_COALESCED_FRAME_OFFSET);
+       writel(val, priv->port_base + RX_COALESCE_SET);
+
+       val = (HW_RX_FIFO_DEPTH << RX_DEPTH_OFFSET) | HW_TX_FIFO_DEPTH;
+       writel(val, priv->port_base + QLEN_SET);
+}
+
+static int hisi_femac_drv_probe(struct platform_device *pdev)
+{
+       struct device *dev = &pdev->dev;
+       struct device_node *node = dev->of_node;
+       struct resource *res;
+       struct net_device *ndev;
+       struct hisi_femac_priv *priv;
+       struct phy_device *phy;
+       const char *mac_addr;
+       int ret;
+
+       ndev = alloc_etherdev(sizeof(*priv));
+       if (!ndev)
+               return -ENOMEM;
+
+       platform_set_drvdata(pdev, ndev);
+
+       priv = netdev_priv(ndev);
+       priv->dev = dev;
+       priv->ndev = ndev;
+
+       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+       priv->port_base = devm_ioremap_resource(dev, res);
+       if (IS_ERR(priv->port_base)) {
+               ret = PTR_ERR(priv->port_base);
+               goto out_free_netdev;
+       }
+
+       res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+       priv->glb_base = devm_ioremap_resource(dev, res);
+       if (IS_ERR(priv->glb_base)) {
+               ret = PTR_ERR(priv->glb_base);
+               goto out_free_netdev;
+       }
+
+       priv->clk = devm_clk_get(&pdev->dev, NULL);
+       if (IS_ERR(priv->clk)) {
+               dev_err(dev, "failed to get clk\n");
+               ret = -ENODEV;
+               goto out_free_netdev;
+       }
+
+       ret = clk_prepare_enable(priv->clk);
+       if (ret) {
+               dev_err(dev, "failed to enable clk %d\n", ret);
+               goto out_free_netdev;
+       }
+
+       priv->mac_rst = devm_reset_control_get(dev, "mac");
+       if (IS_ERR(priv->mac_rst)) {
+               ret = PTR_ERR(priv->mac_rst);
+               goto out_disable_clk;
+       }
+       hisi_femac_core_reset(priv);
+
+       priv->phy_rst = devm_reset_control_get(dev, "phy");
+       if (IS_ERR(priv->phy_rst)) {
+               priv->phy_rst = NULL;
+       } else {
+               ret = of_property_read_u32_array(node,
+                                                PHY_RESET_DELAYS_PROPERTY,
+                                                priv->phy_reset_delays,
+                                                DELAYS_NUM);
+               if (ret)
+                       goto out_disable_clk;
+               hisi_femac_phy_reset(priv);
+       }
+
+       phy = of_phy_get_and_connect(ndev, node, hisi_femac_adjust_link);
+       if (!phy) {
+               dev_err(dev, "connect to PHY failed!\n");
+               ret = -ENODEV;
+               goto out_disable_clk;
+       }
+
+       phy_attached_print(phy, "phy_id=0x%.8lx, phy_mode=%s\n",
+                          (unsigned long)phy->phy_id,
+                          phy_modes(phy->interface));
+
+       mac_addr = of_get_mac_address(node);
+       if (mac_addr)
+               ether_addr_copy(ndev->dev_addr, mac_addr);
+       if (!is_valid_ether_addr(ndev->dev_addr)) {
+               eth_hw_addr_random(ndev);
+               dev_warn(dev, "using random MAC address %pM\n",
+                        ndev->dev_addr);
+       }
+
+       ndev->watchdog_timeo = 6 * HZ;
+       ndev->priv_flags |= IFF_UNICAST_FLT;
+       ndev->netdev_ops = &hisi_femac_netdev_ops;
+       ndev->ethtool_ops = &hisi_femac_ethtools_ops;
+       netif_napi_add(ndev, &priv->napi, hisi_femac_poll, FEMAC_POLL_WEIGHT);
+       SET_NETDEV_DEV(ndev, &pdev->dev);
+
+       hisi_femac_port_init(priv);
+
+       ret = hisi_femac_init_tx_and_rx_queues(priv);
+       if (ret)
+               goto out_disconnect_phy;
+
+       ndev->irq = platform_get_irq(pdev, 0);
+       if (ndev->irq <= 0) {
+               dev_err(dev, "No irq resource\n");
+               ret = -ENODEV;
+               goto out_disconnect_phy;
+       }
+
+       ret = devm_request_irq(dev, ndev->irq, hisi_femac_interrupt,
+                              IRQF_SHARED, pdev->name, ndev);
+       if (ret) {
+               dev_err(dev, "devm_request_irq %d failed!\n", ndev->irq);
+               goto out_disconnect_phy;
+       }
+
+       ret = register_netdev(ndev);
+       if (ret) {
+               dev_err(dev, "register_netdev failed!\n");
+               goto out_disconnect_phy;
+       }
+
+       return 0;
+
+out_disconnect_phy:
+       netif_napi_del(&priv->napi);
+       phy_disconnect(phy);
+out_disable_clk:
+       clk_disable_unprepare(priv->clk);
+out_free_netdev:
+       free_netdev(ndev);
+
+       return ret;
+}
+
+static int hisi_femac_drv_remove(struct platform_device *pdev)
+{
+       struct net_device *ndev = platform_get_drvdata(pdev);
+       struct hisi_femac_priv *priv = netdev_priv(ndev);
+
+       netif_napi_del(&priv->napi);
+       unregister_netdev(ndev);
+
+       phy_disconnect(ndev->phydev);
+       clk_disable_unprepare(priv->clk);
+       free_netdev(ndev);
+
+       return 0;
+}
+
+#ifdef CONFIG_PM
+int hisi_femac_drv_suspend(struct platform_device *pdev,
+                          pm_message_t state)
+{
+       struct net_device *ndev = platform_get_drvdata(pdev);
+       struct hisi_femac_priv *priv = netdev_priv(ndev);
+
+       disable_irq(ndev->irq);
+       if (netif_running(ndev)) {
+               hisi_femac_net_close(ndev);
+               netif_device_detach(ndev);
+       }
+
+       clk_disable_unprepare(priv->clk);
+
+       return 0;
+}
+
+int hisi_femac_drv_resume(struct platform_device *pdev)
+{
+       struct net_device *ndev = platform_get_drvdata(pdev);
+       struct hisi_femac_priv *priv = netdev_priv(ndev);
+
+       clk_prepare_enable(priv->clk);
+       if (priv->phy_rst)
+               hisi_femac_phy_reset(priv);
+
+       if (netif_running(ndev)) {
+               hisi_femac_port_init(priv);
+               hisi_femac_net_open(ndev);
+               netif_device_attach(ndev);
+       }
+       enable_irq(ndev->irq);
+
+       return 0;
+}
+#endif
+
+static const struct of_device_id hisi_femac_match[] = {
+       {.compatible = "hisilicon,hisi-femac-v1",},
+       {.compatible = "hisilicon,hisi-femac-v2",},
+       {.compatible = "hisilicon,hi3516cv300-femac",},
+       {},
+};
+
+MODULE_DEVICE_TABLE(of, hisi_femac_match);
+
+static struct platform_driver hisi_femac_driver = {
+       .driver = {
+               .name = "hisi-femac",
+               .of_match_table = hisi_femac_match,
+       },
+       .probe = hisi_femac_drv_probe,
+       .remove = hisi_femac_drv_remove,
+#ifdef CONFIG_PM
+       .suspend = hisi_femac_drv_suspend,
+       .resume = hisi_femac_drv_resume,
+#endif
+};
+
+module_platform_driver(hisi_femac_driver);
+
+MODULE_DESCRIPTION("Hisilicon Fast Ethernet MAC driver");
+MODULE_AUTHOR("Dongpo Li <lidongpo@hisilicon.com>");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:hisi-femac");
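The probe path above uses the kernel's goto-unwind idiom: each acquired resource gets a cleanup label, and a failure jumps to the label that releases everything acquired so far, in reverse order. A minimal userspace sketch of the same pattern; the acquire and release helpers are hypothetical stand-ins, not driver APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the goto-unwind idiom from hisi_femac_drv_probe(): a
 * failure jumps to the label that undoes only what was acquired so
 * far, in reverse order. All helpers here are hypothetical. */
int clk_released;
int phy_released;

bool acquire_clk(void)      { return true; }
bool acquire_phy(bool fail) { return !fail; }
bool acquire_irq(void)      { return true; }
void release_clk(void)      { clk_released++; }
void release_phy(void)      { phy_released++; }

int probe(bool phy_fails)
{
	if (!acquire_clk())
		return -1;
	if (!acquire_phy(phy_fails))
		goto out_disable_clk;		/* clk held, phy not */
	if (!acquire_irq())
		goto out_disconnect_phy;	/* clk and phy held */
	return 0;

out_disconnect_phy:
	release_phy();
out_disable_clk:
	release_clk();
	return -1;
}
```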
index ecdb685..88f3c85 100644 (file)
@@ -75,6 +75,7 @@
 #include <linux/uaccess.h>
 #include <asm/firmware.h>
 #include <linux/seq_file.h>
+#include <linux/workqueue.h>
 
 #include "ibmvnic.h"
 
@@ -89,6 +90,7 @@ MODULE_VERSION(IBMVNIC_DRIVER_VERSION);
 static int ibmvnic_version = IBMVNIC_INITIAL_VERSION;
 static int ibmvnic_remove(struct vio_dev *);
 static void release_sub_crqs(struct ibmvnic_adapter *);
+static void release_sub_crqs_no_irqs(struct ibmvnic_adapter *);
 static int ibmvnic_reset_crq(struct ibmvnic_adapter *);
 static int ibmvnic_send_crq_init(struct ibmvnic_adapter *);
 static int ibmvnic_reenable_crq_queue(struct ibmvnic_adapter *);
@@ -469,7 +471,8 @@ static int ibmvnic_open(struct net_device *netdev)
        crq.logical_link_state.link_state = IBMVNIC_LOGICAL_LNK_UP;
        ibmvnic_send_crq(adapter, &crq);
 
-       netif_start_queue(netdev);
+       netif_tx_start_all_queues(netdev);
+
        return 0;
 
 bounce_map_failed:
@@ -519,7 +522,7 @@ static int ibmvnic_close(struct net_device *netdev)
        for (i = 0; i < adapter->req_rx_queues; i++)
                napi_disable(&adapter->napi[i]);
 
-       netif_stop_queue(netdev);
+       netif_tx_stop_all_queues(netdev);
 
        if (adapter->bounce_buffer) {
                if (!dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
@@ -1212,12 +1215,6 @@ static struct ibmvnic_sub_crq_queue *init_sub_crq_queue(struct ibmvnic_adapter
                goto reg_failed;
        }
 
-       scrq->irq = irq_create_mapping(NULL, scrq->hw_irq);
-       if (scrq->irq == NO_IRQ) {
-               dev_err(dev, "Error mapping irq\n");
-               goto map_irq_failed;
-       }
-
        scrq->adapter = adapter;
        scrq->size = 4 * PAGE_SIZE / sizeof(*scrq->msgs);
        scrq->cur = 0;
@@ -1230,12 +1227,6 @@ static struct ibmvnic_sub_crq_queue *init_sub_crq_queue(struct ibmvnic_adapter
 
        return scrq;
 
-map_irq_failed:
-       do {
-               rc = plpar_hcall_norets(H_FREE_SUB_CRQ,
-                                       adapter->vdev->unit_address,
-                                       scrq->crq_num);
-       } while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
 reg_failed:
        dma_unmap_single(dev, scrq->msg_token, 4 * PAGE_SIZE,
                         DMA_BIDIRECTIONAL);
@@ -1256,6 +1247,7 @@ static void release_sub_crqs(struct ibmvnic_adapter *adapter)
                        if (adapter->tx_scrq[i]) {
                                free_irq(adapter->tx_scrq[i]->irq,
                                         adapter->tx_scrq[i]);
+                               irq_dispose_mapping(adapter->tx_scrq[i]->irq);
                                release_sub_crq_queue(adapter,
                                                      adapter->tx_scrq[i]);
                        }
@@ -1267,6 +1259,7 @@ static void release_sub_crqs(struct ibmvnic_adapter *adapter)
                        if (adapter->rx_scrq[i]) {
                                free_irq(adapter->rx_scrq[i]->irq,
                                         adapter->rx_scrq[i]);
+                               irq_dispose_mapping(adapter->rx_scrq[i]->irq);
                                release_sub_crq_queue(adapter,
                                                      adapter->rx_scrq[i]);
                        }
@@ -1276,6 +1269,29 @@ static void release_sub_crqs(struct ibmvnic_adapter *adapter)
        adapter->requested_caps = 0;
 }
 
+static void release_sub_crqs_no_irqs(struct ibmvnic_adapter *adapter)
+{
+       int i;
+
+       if (adapter->tx_scrq) {
+               for (i = 0; i < adapter->req_tx_queues; i++)
+                       if (adapter->tx_scrq[i])
+                               release_sub_crq_queue(adapter,
+                                                     adapter->tx_scrq[i]);
+               adapter->tx_scrq = NULL;
+       }
+
+       if (adapter->rx_scrq) {
+               for (i = 0; i < adapter->req_rx_queues; i++)
+                       if (adapter->rx_scrq[i])
+                               release_sub_crq_queue(adapter,
+                                                     adapter->rx_scrq[i]);
+               adapter->rx_scrq = NULL;
+       }
+
+       adapter->requested_caps = 0;
+}
+
 static int disable_scrq_irq(struct ibmvnic_adapter *adapter,
                            struct ibmvnic_sub_crq_queue *scrq)
 {
@@ -1395,6 +1411,66 @@ static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance)
        return IRQ_HANDLED;
 }
 
+static int init_sub_crq_irqs(struct ibmvnic_adapter *adapter)
+{
+       struct device *dev = &adapter->vdev->dev;
+       struct ibmvnic_sub_crq_queue *scrq;
+       int i = 0, j = 0;
+       int rc = 0;
+
+       for (i = 0; i < adapter->req_tx_queues; i++) {
+               scrq = adapter->tx_scrq[i];
+               scrq->irq = irq_create_mapping(NULL, scrq->hw_irq);
+
+               if (scrq->irq == NO_IRQ) {
+                       rc = -EINVAL;
+                       dev_err(dev, "Error mapping irq\n");
+                       goto req_tx_irq_failed;
+               }
+
+               rc = request_irq(scrq->irq, ibmvnic_interrupt_tx,
+                                0, "ibmvnic_tx", scrq);
+
+               if (rc) {
+                       dev_err(dev, "Couldn't register tx irq 0x%x. rc=%d\n",
+                               scrq->irq, rc);
+                       irq_dispose_mapping(scrq->irq);
+                       goto req_tx_irq_failed;
+               }
+       }
+
+       for (i = 0; i < adapter->req_rx_queues; i++) {
+               scrq = adapter->rx_scrq[i];
+               scrq->irq = irq_create_mapping(NULL, scrq->hw_irq);
+               if (scrq->irq == NO_IRQ) {
+                       rc = -EINVAL;
+                       dev_err(dev, "Error mapping irq\n");
+                       goto req_rx_irq_failed;
+               }
+               rc = request_irq(scrq->irq, ibmvnic_interrupt_rx,
+                                0, "ibmvnic_rx", scrq);
+               if (rc) {
+                       dev_err(dev, "Couldn't register rx irq 0x%x. rc=%d\n",
+                               scrq->irq, rc);
+                       irq_dispose_mapping(scrq->irq);
+                       goto req_rx_irq_failed;
+               }
+       }
+       return rc;
+
+req_rx_irq_failed:
+       for (j = 0; j < i; j++) {
+               free_irq(adapter->rx_scrq[j]->irq, adapter->rx_scrq[j]);
+               irq_dispose_mapping(adapter->rx_scrq[j]->irq);
+       }
+       i = adapter->req_tx_queues;
+req_tx_irq_failed:
+       for (j = 0; j < i; j++) {
+               free_irq(adapter->tx_scrq[j]->irq, adapter->tx_scrq[j]);
+               irq_dispose_mapping(adapter->tx_scrq[j]->irq);
+       }
+       release_sub_crqs_no_irqs(adapter);
+       return rc;
+}
+
 static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
 {
        struct device *dev = &adapter->vdev->dev;
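The unwind in init_sub_crq_irqs() must tear down only the IRQs it actually requested, and each cleanup loop must run both free_irq() and irq_dispose_mapping() per queue, which is why the loop body needs braces. A hedged sketch of that partial-failure unwind, with hypothetical stand-in helpers:

```c
#include <assert.h>

#define NQ 4
int irq_freed[NQ];
int irq_disposed[NQ];

void free_one(int q)    { irq_freed[q] = 1; }
void dispose_one(int q) { irq_disposed[q] = 1; }

/* If requesting the IRQ for queue fail_at fails, unwind queues
 * 0..fail_at-1 only, running *both* teardown steps for each queue.
 * The braces matter: without them only the first call would loop. */
int setup_irqs(int n, int fail_at)
{
	int i, j;

	for (i = 0; i < n; i++) {
		if (i == fail_at)	/* hypothetical request failure */
			goto unwind;
	}
	return 0;

unwind:
	for (j = 0; j < i; j++) {
		free_one(j);
		dispose_one(j);
	}
	return -1;
}
```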
@@ -1403,8 +1479,7 @@ static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
        union ibmvnic_crq crq;
        int total_queues;
        int more = 0;
-       int i, j;
-       int rc;
+       int i;
 
        if (!retry) {
                /* Sub-CRQ entries are 32 byte long */
@@ -1483,13 +1558,6 @@ static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
        for (i = 0; i < adapter->req_tx_queues; i++) {
                adapter->tx_scrq[i] = allqueues[i];
                adapter->tx_scrq[i]->pool_index = i;
-               rc = request_irq(adapter->tx_scrq[i]->irq, ibmvnic_interrupt_tx,
-                                0, "ibmvnic_tx", adapter->tx_scrq[i]);
-               if (rc) {
-                       dev_err(dev, "Couldn't register tx irq 0x%x. rc=%d\n",
-                               adapter->tx_scrq[i]->irq, rc);
-                       goto req_tx_irq_failed;
-               }
        }
 
        adapter->rx_scrq = kcalloc(adapter->req_rx_queues,
@@ -1500,13 +1568,6 @@ static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
        for (i = 0; i < adapter->req_rx_queues; i++) {
                adapter->rx_scrq[i] = allqueues[i + adapter->req_tx_queues];
                adapter->rx_scrq[i]->scrq_num = i;
-               rc = request_irq(adapter->rx_scrq[i]->irq, ibmvnic_interrupt_rx,
-                                0, "ibmvnic_rx", adapter->rx_scrq[i]);
-               if (rc) {
-                       dev_err(dev, "Couldn't register rx irq 0x%x. rc=%d\n",
-                               adapter->rx_scrq[i]->irq, rc);
-                       goto req_rx_irq_failed;
-               }
        }
 
        memset(&crq, 0, sizeof(crq));
@@ -1559,15 +1620,6 @@ static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
 
        return;
 
-req_rx_irq_failed:
-       for (j = 0; j < i; j++)
-               free_irq(adapter->rx_scrq[j]->irq, adapter->rx_scrq[j]);
-       i = adapter->req_tx_queues;
-req_tx_irq_failed:
-       for (j = 0; j < i; j++)
-               free_irq(adapter->tx_scrq[j]->irq, adapter->tx_scrq[j]);
-       kfree(adapter->rx_scrq);
-       adapter->rx_scrq = NULL;
 rx_failed:
        kfree(adapter->tx_scrq);
        adapter->tx_scrq = NULL;
@@ -2348,9 +2400,9 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
                         *req_value,
                         (long int)be32_to_cpu(crq->request_capability_rsp.
                                               number), name);
-               release_sub_crqs(adapter);
+               release_sub_crqs_no_irqs(adapter);
                *req_value = be32_to_cpu(crq->request_capability_rsp.number);
-               complete(&adapter->init_done);
+               init_sub_crqs(adapter, 1);
                return;
        default:
                dev_err(dev, "Error %d in request cap rsp\n",
@@ -2659,7 +2711,7 @@ static void handle_query_cap_rsp(union ibmvnic_crq *crq,
 
 out:
        if (atomic_read(&adapter->running_cap_queries) == 0)
-               complete(&adapter->init_done);
+               init_sub_crqs(adapter, 0);
                /* We're done querying the capabilities, initialize sub-crqs */
 }
 
@@ -3202,8 +3254,8 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
                        dev_info(dev, "Partner initialized\n");
                        /* Send back a response */
                        rc = ibmvnic_send_crq_init_complete(adapter);
-                       if (rc == 0)
-                               send_version_xchg(adapter);
+                       if (!rc)
+                               schedule_work(&adapter->vnic_crq_init);
                        else
                                dev_err(dev, "Can't send initrsp rc=%ld\n", rc);
                        break;
@@ -3555,8 +3607,63 @@ static const struct file_operations ibmvnic_dump_ops = {
        .release        = single_release,
 };
 
+static void handle_crq_init_rsp(struct work_struct *work)
+{
+       struct ibmvnic_adapter *adapter = container_of(work,
+                                                      struct ibmvnic_adapter,
+                                                      vnic_crq_init);
+       struct device *dev = &adapter->vdev->dev;
+       struct net_device *netdev = adapter->netdev;
+       unsigned long timeout = msecs_to_jiffies(30000);
+       int rc;
+
+       send_version_xchg(adapter);
+       reinit_completion(&adapter->init_done);
+       if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+               dev_err(dev, "Passive init timeout\n");
+               goto task_failed;
+       }
+
+       do {
+               if (adapter->renegotiate) {
+                       adapter->renegotiate = false;
+                       release_sub_crqs_no_irqs(adapter);
+                       send_cap_queries(adapter);
+
+                       reinit_completion(&adapter->init_done);
+                       if (!wait_for_completion_timeout(&adapter->init_done,
+                                                        timeout)) {
+                               dev_err(dev, "Passive init timeout\n");
+                               goto task_failed;
+                       }
+               }
+       } while (adapter->renegotiate);
+       rc = init_sub_crq_irqs(adapter);
+
+       if (rc)
+               goto task_failed;
+
+       netdev->real_num_tx_queues = adapter->req_tx_queues;
+
+       rc = register_netdev(netdev);
+       if (rc) {
+               dev_err(dev,
+                       "failed to register netdev rc=%d\n", rc);
+               goto register_failed;
+       }
+       dev_info(dev, "ibmvnic registered\n");
+
+       return;
+
+register_failed:
+       release_sub_crqs(adapter);
+task_failed:
+       dev_err(dev, "Passive initialization was not successful\n");
+}
+
 static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
 {
+       unsigned long timeout = msecs_to_jiffies(30000);
        struct ibmvnic_adapter *adapter;
        struct net_device *netdev;
        unsigned char *mac_addr_p;
@@ -3593,6 +3700,8 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
        netdev->ethtool_ops = &ibmvnic_ethtool_ops;
        SET_NETDEV_DEV(netdev, &dev->dev);
 
+       INIT_WORK(&adapter->vnic_crq_init, handle_crq_init_rsp);
+
        spin_lock_init(&adapter->stats_lock);
 
        rc = ibmvnic_init_crq_queue(adapter);
@@ -3635,30 +3744,26 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
        ibmvnic_send_crq_init(adapter);
 
        init_completion(&adapter->init_done);
-       wait_for_completion(&adapter->init_done);
+       if (!wait_for_completion_timeout(&adapter->init_done, timeout))
+               return 0;
 
        do {
-               adapter->renegotiate = false;
-
-               init_sub_crqs(adapter, 0);
-               reinit_completion(&adapter->init_done);
-               wait_for_completion(&adapter->init_done);
-
                if (adapter->renegotiate) {
-                       release_sub_crqs(adapter);
+                       adapter->renegotiate = false;
+                       release_sub_crqs_no_irqs(adapter);
                        send_cap_queries(adapter);
 
                        reinit_completion(&adapter->init_done);
-                       wait_for_completion(&adapter->init_done);
+                       if (!wait_for_completion_timeout(&adapter->init_done,
+                                                        timeout))
+                               return 0;
                }
        } while (adapter->renegotiate);
 
-       /* if init_sub_crqs is partially successful, retry */
-       while (!adapter->tx_scrq || !adapter->rx_scrq) {
-               init_sub_crqs(adapter, 1);
-
-               reinit_completion(&adapter->init_done);
-               wait_for_completion(&adapter->init_done);
+       rc = init_sub_crq_irqs(adapter);
+       if (rc) {
+               dev_err(&dev->dev, "failed to initialize sub crq irqs\n");
+               goto free_debugfs;
        }
 
        netdev->real_num_tx_queues = adapter->req_tx_queues;
@@ -3666,12 +3771,14 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
        rc = register_netdev(netdev);
        if (rc) {
                dev_err(&dev->dev, "failed to register netdev rc=%d\n", rc);
-               goto free_debugfs;
+               goto free_sub_crqs;
        }
        dev_info(&dev->dev, "ibmvnic registered\n");
 
        return 0;
 
+free_sub_crqs:
+       release_sub_crqs(adapter);
 free_debugfs:
        if (adapter->debugfs_dir && !IS_ERR(adapter->debugfs_dir))
                debugfs_remove_recursive(adapter->debugfs_dir);
index 0b66a50..e82898f 100644 (file)
@@ -1045,4 +1045,6 @@ struct ibmvnic_adapter {
        u64 opt_rxba_entries_per_subcrq;
        __be64 tx_rx_desc_req;
        u8 map_id;
+
+       struct work_struct vnic_crq_init;
 };
index e98b86b..c4cf08d 100644 (file)
@@ -362,6 +362,7 @@ enum fm10k_state_t {
        __FM10K_SERVICE_DISABLE,
        __FM10K_MBX_LOCK,
        __FM10K_LINK_DOWN,
+       __FM10K_UPDATING_STATS,
 };
 
 static inline void fm10k_mbx_lock(struct fm10k_intfc *interface)
@@ -457,6 +458,7 @@ __be16 fm10k_tx_encap_offload(struct sk_buff *skb);
 netdev_tx_t fm10k_xmit_frame_ring(struct sk_buff *skb,
                                  struct fm10k_ring *tx_ring);
 void fm10k_tx_timeout_reset(struct fm10k_intfc *interface);
+u64 fm10k_get_tx_pending(struct fm10k_ring *ring);
 bool fm10k_check_tx_hang(struct fm10k_ring *tx_ring);
 void fm10k_alloc_rx_buffers(struct fm10k_ring *rx_ring, u16 cleaned_count);
 
index 5bbf19c..d6baaea 100644 (file)
@@ -519,8 +519,12 @@ s32 fm10k_get_host_state_generic(struct fm10k_hw *hw, bool *host_ready)
                goto out;
 
        /* interface cannot receive traffic without logical ports */
-       if (mac->dglort_map == FM10K_DGLORTMAP_NONE)
+       if (mac->dglort_map == FM10K_DGLORTMAP_NONE) {
+               if (hw->mac.ops.request_lport_map)
+                       ret_val = hw->mac.ops.request_lport_map(hw);
+
                goto out;
+       }
 
        /* if we passed all the tests above then the switch is ready and we no
         * longer need to check for link
index 9b51954..c04cbe9 100644 (file)
@@ -76,6 +76,8 @@ static const struct fm10k_stats fm10k_gstrings_global_stats[] = {
        FM10K_STAT("mac_rules_used", hw.swapi.mac.used),
        FM10K_STAT("mac_rules_avail", hw.swapi.mac.avail),
 
+       FM10K_STAT("reset_while_pending", hw.mac.reset_while_pending),
+
        FM10K_STAT("tx_hang_count", tx_timeout_count),
 };
 
index a9ccc1e..e9767b6 100644 (file)
@@ -28,7 +28,7 @@
 
 #include "fm10k.h"
 
-#define DRV_VERSION    "0.19.3-k"
+#define DRV_VERSION    "0.21.2-k"
 #define DRV_SUMMARY    "Intel(R) Ethernet Switch Host Interface Driver"
 const char fm10k_driver_version[] = DRV_VERSION;
 char fm10k_driver_name[] = "fm10k";
@@ -1128,11 +1128,13 @@ static u64 fm10k_get_tx_completed(struct fm10k_ring *ring)
        return ring->stats.packets;
 }
 
-static u64 fm10k_get_tx_pending(struct fm10k_ring *ring)
+u64 fm10k_get_tx_pending(struct fm10k_ring *ring)
 {
-       /* use SW head and tail until we have real hardware */
-       u32 head = ring->next_to_clean;
-       u32 tail = ring->next_to_use;
+       struct fm10k_intfc *interface = ring->q_vector->interface;
+       struct fm10k_hw *hw = &interface->hw;
+
+       u32 head = fm10k_read_reg(hw, FM10K_TDH(ring->reg_idx));
+       u32 tail = fm10k_read_reg(hw, FM10K_TDT(ring->reg_idx));
 
        return ((head <= tail) ? tail : tail + ring->count) - head;
 }
@@ -1856,7 +1858,7 @@ static int fm10k_init_msix_capability(struct fm10k_intfc *interface)
        if (v_budget < 0) {
                kfree(interface->msix_entries);
                interface->msix_entries = NULL;
-               return -ENOMEM;
+               return v_budget;
        }
 
        /* record the number of queues available for q_vectors */
index b7dbc8a..35c1dba 100644 (file)
@@ -41,6 +41,8 @@ struct fm10k_mbx_info;
 #define FM10K_MBX_ACK_INTERRUPT                        0x00000010
 #define FM10K_MBX_INTERRUPT_ENABLE             0x00000020
 #define FM10K_MBX_INTERRUPT_DISABLE            0x00000040
+#define FM10K_MBX_GLOBAL_REQ_INTERRUPT         0x00000200
+#define FM10K_MBX_GLOBAL_ACK_INTERRUPT         0x00000400
 #define FM10K_MBICR(_n)                ((_n) + 0x18840)
 #define FM10K_GMBX             0x18842
 
index d00cb19..20a5bbe 100644 (file)
@@ -432,9 +432,7 @@ static void fm10k_restore_vxlan_port(struct fm10k_intfc *interface)
 /**
  * fm10k_add_vxlan_port
  * @netdev: network interface device structure
- * @sa_family: Address family of new port
- * @port: port number used for VXLAN
- * @type: Enumerated value specifying udp encapsulation type
+ * @ti: Tunnel endpoint information
  *
  * This function is called when a new VXLAN interface has added a new port
  * number to the range that is currently in use for VXLAN.  The new port
@@ -480,9 +478,7 @@ insert_tail:
 /**
  * fm10k_del_vxlan_port
  * @netdev: network interface device structure
- * @sa_family: Address family of freed port
- * @port: port number used for VXLAN
- * @type: Enumerated value specifying udp encapsulation type
+ * @ti: Tunnel endpoint information
  *
  * This function is called when a new VXLAN interface has freed a port
  * number from the range that is currently in use for VXLAN.  The freed
index e05aca9..b8245c7 100644 (file)
@@ -123,11 +123,24 @@ static void fm10k_service_timer(unsigned long data)
 static void fm10k_detach_subtask(struct fm10k_intfc *interface)
 {
        struct net_device *netdev = interface->netdev;
+       u32 __iomem *hw_addr;
+       u32 value;
 
        /* do nothing if device is still present or hw_addr is set */
        if (netif_device_present(netdev) || interface->hw.hw_addr)
                return;
 
+       /* check the real address space to see if we've recovered */
+       hw_addr = READ_ONCE(interface->uc_addr);
+       value = readl(hw_addr);
+       if (~value) {
+               interface->hw.hw_addr = interface->uc_addr;
+               netif_device_attach(netdev);
+               interface->flags |= FM10K_FLAG_RESET_REQUESTED;
+               netdev_warn(netdev, "PCIe link restored, device now attached\n");
+               return;
+       }
+
        rtnl_lock();
 
        if (netif_running(netdev))
@@ -136,11 +149,9 @@ static void fm10k_detach_subtask(struct fm10k_intfc *interface)
        rtnl_unlock();
 }
 
-static void fm10k_reinit(struct fm10k_intfc *interface)
+static void fm10k_prepare_for_reset(struct fm10k_intfc *interface)
 {
        struct net_device *netdev = interface->netdev;
-       struct fm10k_hw *hw = &interface->hw;
-       int err;
 
        WARN_ON(in_interrupt());
 
@@ -165,6 +176,19 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
        /* delay any future reset requests */
        interface->last_reset = jiffies + (10 * HZ);
 
+       rtnl_unlock();
+}
+
+static int fm10k_handle_reset(struct fm10k_intfc *interface)
+{
+       struct net_device *netdev = interface->netdev;
+       struct fm10k_hw *hw = &interface->hw;
+       int err;
+
+       rtnl_lock();
+
+       pci_set_master(interface->pdev);
+
        /* reset and initialize the hardware so it is in a known state */
        err = hw->mac.ops.reset_hw(hw);
        if (err) {
@@ -185,7 +209,7 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
                goto reinit_err;
        }
 
-       /* reassociate interrupts */
+       /* re-associate interrupts */
        err = fm10k_mbx_request_irq(interface);
        if (err)
                goto err_mbx_irq;
@@ -219,7 +243,7 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
 
        clear_bit(__FM10K_RESETTING, &interface->state);
 
-       return;
+       return err;
 err_open:
        fm10k_mbx_free_irq(interface);
 err_mbx_irq:
@@ -230,6 +254,20 @@ reinit_err:
        rtnl_unlock();
 
        clear_bit(__FM10K_RESETTING, &interface->state);
+
+       return err;
+}
+
+static void fm10k_reinit(struct fm10k_intfc *interface)
+{
+       int err;
+
+       fm10k_prepare_for_reset(interface);
+
+       err = fm10k_handle_reset(interface);
+       if (err)
+               dev_err(&interface->pdev->dev,
+                       "fm10k_handle_reset failed: %d\n", err);
 }
 
 static void fm10k_reset_subtask(struct fm10k_intfc *interface)
@@ -372,12 +410,19 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
        u64 bytes, pkts;
        int i;
 
+       /* ensure only one thread updates stats at a time */
+       if (test_and_set_bit(__FM10K_UPDATING_STATS, &interface->state))
+               return;
+
        /* do not allow stats update via service task for next second */
        interface->next_stats_update = jiffies + HZ;
 
        /* gather some stats to the interface struct that are per queue */
        for (bytes = 0, pkts = 0, i = 0; i < interface->num_tx_queues; i++) {
-               struct fm10k_ring *tx_ring = interface->tx_ring[i];
+               struct fm10k_ring *tx_ring = READ_ONCE(interface->tx_ring[i]);
+
+               if (!tx_ring)
+                       continue;
 
                restart_queue += tx_ring->tx_stats.restart_queue;
                tx_busy += tx_ring->tx_stats.tx_busy;
@@ -396,7 +441,10 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
 
        /* gather some stats to the interface struct that are per queue */
        for (bytes = 0, pkts = 0, i = 0; i < interface->num_rx_queues; i++) {
-               struct fm10k_ring *rx_ring = interface->rx_ring[i];
+               struct fm10k_ring *rx_ring = READ_ONCE(interface->rx_ring[i]);
+
+               if (!rx_ring)
+                       continue;
 
                bytes += rx_ring->stats.bytes;
                pkts += rx_ring->stats.packets;
@@ -443,6 +491,8 @@ void fm10k_update_stats(struct fm10k_intfc *interface)
        /* Fill out the OS statistics structure */
        net_stats->rx_errors = rx_errors;
        net_stats->rx_dropped = interface->stats.nodesc_drop.count;
+
+       clear_bit(__FM10K_UPDATING_STATS, &interface->state);
 }
 
 /**
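The new __FM10K_UPDATING_STATS bit above makes fm10k_update_stats() a single-updater section: test_and_set_bit() lets exactly one caller proceed, and everyone else bails out until the owner clears the bit. A userspace sketch of the same guard, using C11 atomic_flag as a stand-in for the kernel bitops:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the __FM10K_UPDATING_STATS guard: one caller wins the
 * test-and-set and does the work; concurrent callers return early.
 * atomic_flag stands in for test_and_set_bit() and clear_bit(). */
atomic_flag updating = ATOMIC_FLAG_INIT;
int updates;

int update_stats(void)
{
	if (atomic_flag_test_and_set(&updating))
		return -1;		/* another updater is running */

	updates++;			/* the actual stats work */

	atomic_flag_clear(&updating);
	return 0;
}
```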
@@ -1566,6 +1616,9 @@ void fm10k_up(struct fm10k_intfc *interface)
        /* configure interrupts */
        hw->mac.ops.update_int_moderator(hw);
 
+       /* enable statistics capture again */
+       clear_bit(__FM10K_UPDATING_STATS, &interface->state);
+
        /* clear down bit to indicate we are ready to go */
        clear_bit(__FM10K_DOWN, &interface->state);
 
@@ -1598,10 +1651,11 @@ void fm10k_down(struct fm10k_intfc *interface)
 {
        struct net_device *netdev = interface->netdev;
        struct fm10k_hw *hw = &interface->hw;
-       int err;
+       int err, i = 0, count = 0;
 
        /* signal that we are down to the interrupt handler and service task */
-       set_bit(__FM10K_DOWN, &interface->state);
+       if (test_and_set_bit(__FM10K_DOWN, &interface->state))
+               return;
 
        /* call carrier off first to avoid false dev_watchdog timeouts */
        netif_carrier_off(netdev);
@@ -1613,18 +1667,57 @@ void fm10k_down(struct fm10k_intfc *interface)
        /* reset Rx filters */
        fm10k_reset_rx_state(interface);
 
-       /* allow 10ms for device to quiesce */
-       usleep_range(10000, 20000);
-
        /* disable polling routines */
        fm10k_napi_disable_all(interface);
 
        /* capture stats one last time before stopping interface */
        fm10k_update_stats(interface);
 
+       /* prevent updating statistics while we're down */
+       while (test_and_set_bit(__FM10K_UPDATING_STATS, &interface->state))
+               usleep_range(1000, 2000);
+
+       /* skip waiting for TX DMA if we lost PCIe link */
+       if (FM10K_REMOVED(hw->hw_addr))
+               goto skip_tx_dma_drain;
+
+       /* In some rare circumstances it can take a while for Tx queues to
+        * quiesce and be fully disabled. Attempt to .stop_hw() first, and
+        * then if we get ERR_REQUESTS_PENDING, go ahead and wait in a loop
+        * until the Tx queues have emptied, or until a number of retries. If
+        * we fail to clear within the retry loop, we will issue a warning
+        * indicating that Tx DMA is probably hung. Note this means we call
+        * .stop_hw() twice but this shouldn't cause any problems.
+        */
+       err = hw->mac.ops.stop_hw(hw);
+       if (err != FM10K_ERR_REQUESTS_PENDING)
+               goto skip_tx_dma_drain;
+
+#define TX_DMA_DRAIN_RETRIES 25
+       for (count = 0; count < TX_DMA_DRAIN_RETRIES; count++) {
+               usleep_range(10000, 20000);
+
+               /* start checking at the last ring to have pending Tx */
+               for (; i < interface->num_tx_queues; i++)
+                       if (fm10k_get_tx_pending(interface->tx_ring[i]))
+                               break;
+
+               /* if all the queues are drained, we can break now */
+               if (i == interface->num_tx_queues)
+                       break;
+       }
+
+       if (count >= TX_DMA_DRAIN_RETRIES)
+               dev_err(&interface->pdev->dev,
+                       "Tx queues failed to drain after %d tries. Tx DMA is probably hung.\n",
+                       count);
+skip_tx_dma_drain:
        /* Disable DMA engine for Tx/Rx */
        err = hw->mac.ops.stop_hw(hw);
-       if (err)
+       if (err == FM10K_ERR_REQUESTS_PENDING)
+               dev_err(&interface->pdev->dev,
+                       "hw was not shut down gracefully due to pending requests\n");
+       else if (err)
                dev_err(&interface->pdev->dev, "stop_hw failed: %d\n", err);
 
        /* free any buffers still on the rings */
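The retry loop above polls the Tx queues until they drain or the retry budget runs out, resuming each scan at the first queue still pending rather than restarting from zero. A hedged sketch of that bounded drain loop, where pending() is a hypothetical stand-in for fm10k_get_tx_pending() and each poll drains one descriptor:

```c
#include <assert.h>

#define DRAIN_RETRIES 25

int remaining[3] = { 0, 2, 0 };	/* fake per-queue pending counts */

int pending(int q)
{
	if (remaining[q] > 0)
		return remaining[q]--;	/* drains one per poll */
	return 0;
}

/* Returns the number of retries used, or -1 if the queues never
 * drain within the budget (the "Tx DMA is probably hung" case). */
int drain_tx(int nqueues)
{
	int i = 0, count;

	for (count = 0; count < DRAIN_RETRIES; count++) {
		/* i persists across retries: resume at the queue that
		 * last had pending work, as the driver loop does */
		for (; i < nqueues; i++)
			if (pending(i))
				break;
		if (i == nqueues)
			return count;	/* all queues drained */
	}
	return -1;
}
```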
@@ -1750,6 +1843,7 @@ static int fm10k_sw_init(struct fm10k_intfc *interface,
 
        /* Start off interface as being down */
        set_bit(__FM10K_DOWN, &interface->state);
+       set_bit(__FM10K_UPDATING_STATS, &interface->state);
 
        return 0;
 }
@@ -2033,6 +2127,48 @@ static void fm10k_remove(struct pci_dev *pdev)
        pci_disable_device(pdev);
 }
 
+static void fm10k_prepare_suspend(struct fm10k_intfc *interface)
+{
+       /* the watchdog task reads from registers, which might appear like
+        * a surprise remove if the PCIe device is disabled while we're
+        * stopped. We stop the watchdog task until after we resume software
+        * activity.
+        */
+       set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+       cancel_work_sync(&interface->service_task);
+
+       fm10k_prepare_for_reset(interface);
+}
+
+static int fm10k_handle_resume(struct fm10k_intfc *interface)
+{
+       struct fm10k_hw *hw = &interface->hw;
+       int err;
+
+       /* reset statistics starting values */
+       hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+
+       err = fm10k_handle_reset(interface);
+       if (err)
+               return err;
+
+       /* assume host is not ready, to prevent race with watchdog in case we
+        * actually don't have connection to the switch
+        */
+       interface->host_ready = false;
+       fm10k_watchdog_host_not_ready(interface);
+
+       /* force link to stay down for a second to prevent link flutter */
+       interface->link_down_event = jiffies + (HZ);
+       set_bit(__FM10K_LINK_DOWN, &interface->state);
+
+       /* clear the service task disable bit to allow service task to start */
+       clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+       fm10k_service_event_schedule(interface);
+
+       return err;
+}
+
 #ifdef CONFIG_PM
 /**
  * fm10k_resume - Restore device to pre-sleep state
@@ -2069,60 +2205,13 @@ static int fm10k_resume(struct pci_dev *pdev)
        /* refresh hw_addr in case it was dropped */
        hw->hw_addr = interface->uc_addr;
 
-       /* reset hardware to known state */
-       err = hw->mac.ops.init_hw(&interface->hw);
-       if (err) {
-               dev_err(&pdev->dev, "init_hw failed: %d\n", err);
-               return err;
-       }
-
-       /* reset statistics starting values */
-       hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
-
-       rtnl_lock();
-
-       err = fm10k_init_queueing_scheme(interface);
-       if (err)
-               goto err_queueing_scheme;
-
-       err = fm10k_mbx_request_irq(interface);
-       if (err)
-               goto err_mbx_irq;
-
-       err = fm10k_hw_ready(interface);
+       err = fm10k_handle_resume(interface);
        if (err)
-               goto err_open;
-
-       err = netif_running(netdev) ? fm10k_open(netdev) : 0;
-       if (err)
-               goto err_open;
-
-       rtnl_unlock();
-
-       /* assume host is not ready, to prevent race with watchdog in case we
-        * actually don't have connection to the switch
-        */
-       interface->host_ready = false;
-       fm10k_watchdog_host_not_ready(interface);
-
-       /* clear the service task disable bit to allow service task to start */
-       clear_bit(__FM10K_SERVICE_DISABLE, &interface->state);
-       fm10k_service_event_schedule(interface);
-
-       /* restore SR-IOV interface */
-       fm10k_iov_resume(pdev);
+               return err;
 
        netif_device_attach(netdev);
 
        return 0;
-err_open:
-       fm10k_mbx_free_irq(interface);
-err_mbx_irq:
-       fm10k_clear_queueing_scheme(interface);
-err_queueing_scheme:
-       rtnl_unlock();
-
-       return err;
 }
 
 /**
@@ -2142,27 +2231,7 @@ static int fm10k_suspend(struct pci_dev *pdev,
 
        netif_device_detach(netdev);
 
-       fm10k_iov_suspend(pdev);
-
-       /* the watchdog tasks may read registers, which will appear like a
-        * surprise-remove event once the PCI device is disabled. This will
-        * cause us to close the netdevice, so we don't retain the open/closed
-        * state post-resume. Prevent this by disabling the service task while
-        * suspended, until we actually resume.
-        */
-       set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
-       cancel_work_sync(&interface->service_task);
-
-       rtnl_lock();
-
-       if (netif_running(netdev))
-               fm10k_close(netdev);
-
-       fm10k_mbx_free_irq(interface);
-
-       fm10k_clear_queueing_scheme(interface);
-
-       rtnl_unlock();
+       fm10k_prepare_suspend(interface);
 
        err = pci_save_state(pdev);
        if (err)
@@ -2195,17 +2264,7 @@ static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
        if (state == pci_channel_io_perm_failure)
                return PCI_ERS_RESULT_DISCONNECT;
 
-       rtnl_lock();
-
-       if (netif_running(netdev))
-               fm10k_close(netdev);
-
-       fm10k_mbx_free_irq(interface);
-
-       /* free interrupts */
-       fm10k_clear_queueing_scheme(interface);
-
-       rtnl_unlock();
+       fm10k_prepare_suspend(interface);
 
        /* Request a slot reset. */
        return PCI_ERS_RESULT_NEED_RESET;
@@ -2219,7 +2278,6 @@ static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
  */
 static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
 {
-       struct fm10k_intfc *interface = pci_get_drvdata(pdev);
        pci_ers_result_t result;
 
        if (pci_enable_device_mem(pdev)) {
@@ -2237,12 +2295,6 @@ static pci_ers_result_t fm10k_io_slot_reset(struct pci_dev *pdev)
 
                pci_wake_from_d3(pdev, false);
 
-               /* refresh hw_addr in case it was dropped */
-               interface->hw.hw_addr = interface->uc_addr;
-
-               interface->flags |= FM10K_FLAG_RESET_REQUESTED;
-               fm10k_service_event_schedule(interface);
-
                result = PCI_ERS_RESULT_RECOVERED;
        }
 
@@ -2262,50 +2314,54 @@ static void fm10k_io_resume(struct pci_dev *pdev)
 {
        struct fm10k_intfc *interface = pci_get_drvdata(pdev);
        struct net_device *netdev = interface->netdev;
-       struct fm10k_hw *hw = &interface->hw;
-       int err = 0;
-
-       /* reset hardware to known state */
-       err = hw->mac.ops.init_hw(&interface->hw);
-       if (err) {
-               dev_err(&pdev->dev, "init_hw failed: %d\n", err);
-               return;
-       }
-
-       /* reset statistics starting values */
-       hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
-
-       rtnl_lock();
+       int err;
 
-       err = fm10k_init_queueing_scheme(interface);
-       if (err) {
-               dev_err(&interface->pdev->dev,
-                       "init_queueing_scheme failed: %d\n", err);
-               goto unlock;
-       }
+       err = fm10k_handle_resume(interface);
 
-       /* reassociate interrupts */
-       fm10k_mbx_request_irq(interface);
+       if (err)
+               dev_warn(&pdev->dev,
+                        "fm10k_io_resume failed: %d\n", err);
+       else
+               netif_device_attach(netdev);
+}
 
-       rtnl_lock();
-       if (netif_running(netdev))
-               err = fm10k_open(netdev);
-       rtnl_unlock();
+/**
+ * fm10k_io_reset_notify - called when PCI function is reset
+ * @pdev: Pointer to PCI device
+ *
+ * This callback is called when the PCI function is reset such as from
+ * /sys/class/net/<enpX>/device/reset or similar. When prepare is true, it
+ * means we should prepare for a function reset. If prepare is false, it means
+ * the function reset just occurred.
+ */
+static void fm10k_io_reset_notify(struct pci_dev *pdev, bool prepare)
+{
+       struct fm10k_intfc *interface = pci_get_drvdata(pdev);
+       int err = 0;
 
-       /* final check of hardware state before registering the interface */
-       err = err ? : fm10k_hw_ready(interface);
+       if (prepare) {
+               /* warn incase we have any active VF devices */
+               if (pci_num_vf(pdev))
+                       dev_warn(&pdev->dev,
+                                "PCIe FLR may cause issues for any active VF devices\n");
 
-       if (!err)
-               netif_device_attach(netdev);
+               fm10k_prepare_suspend(interface);
+       } else {
+               err = fm10k_handle_resume(interface);
+       }
 
-unlock:
-       rtnl_unlock();
+       if (err) {
+               dev_warn(&pdev->dev,
+                        "fm10k_io_reset_notify failed: %d\n", err);
+               netif_device_detach(interface->netdev);
+       }
 }
 
 static const struct pci_error_handlers fm10k_err_handler = {
        .error_detected = fm10k_io_error_detected,
        .slot_reset = fm10k_io_slot_reset,
        .resume = fm10k_io_resume,
+       .reset_notify = fm10k_io_reset_notify,
 };
 
 static struct pci_driver fm10k_driver = {
index dc75507..682299d 100644 (file)
@@ -51,34 +51,37 @@ static s32 fm10k_reset_hw_pf(struct fm10k_hw *hw)
 
        /* shut down all rings */
        err = fm10k_disable_queues_generic(hw, FM10K_MAX_QUEUES);
-       if (err)
+       if (err == FM10K_ERR_REQUESTS_PENDING) {
+               hw->mac.reset_while_pending++;
+               goto force_reset;
+       } else if (err) {
                return err;
+       }
 
        /* Verify that DMA is no longer active */
        reg = fm10k_read_reg(hw, FM10K_DMA_CTRL);
        if (reg & (FM10K_DMA_CTRL_TX_ACTIVE | FM10K_DMA_CTRL_RX_ACTIVE))
                return FM10K_ERR_DMA_PENDING;
 
-       /* verify the switch is ready for reset */
-       reg = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
-       if (!(reg & FM10K_DMA_CTRL2_SWITCH_READY))
-               goto out;
-
+force_reset:
        /* Inititate data path reset */
-       reg |= FM10K_DMA_CTRL_DATAPATH_RESET;
+       reg = FM10K_DMA_CTRL_DATAPATH_RESET;
        fm10k_write_reg(hw, FM10K_DMA_CTRL, reg);
 
        /* Flush write and allow 100us for reset to complete */
        fm10k_write_flush(hw);
        udelay(FM10K_RESET_TIMEOUT);
 
+       /* Reset mailbox global interrupts */
+       reg = FM10K_MBX_GLOBAL_REQ_INTERRUPT | FM10K_MBX_GLOBAL_ACK_INTERRUPT;
+       fm10k_write_reg(hw, FM10K_GMBX, reg);
+
        /* Verify we made it out of reset */
        reg = fm10k_read_reg(hw, FM10K_IP);
        if (!(reg & FM10K_IP_NOTINRESET))
-               err = FM10K_ERR_RESET_FAILED;
+               return FM10K_ERR_RESET_FAILED;
 
-out:
-       return err;
+       return 0;
 }
 
 /**
@@ -1619,25 +1622,15 @@ static s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw)
  **/
 static s32 fm10k_get_host_state_pf(struct fm10k_hw *hw, bool *switch_ready)
 {
-       s32 ret_val = 0;
        u32 dma_ctrl2;
 
        /* verify the switch is ready for interaction */
        dma_ctrl2 = fm10k_read_reg(hw, FM10K_DMA_CTRL2);
        if (!(dma_ctrl2 & FM10K_DMA_CTRL2_SWITCH_READY))
-               goto out;
+               return 0;
 
        /* retrieve generic host state info */
-       ret_val = fm10k_get_host_state_generic(hw, switch_ready);
-       if (ret_val)
-               goto out;
-
-       /* interface cannot receive traffic without logical ports */
-       if (hw->mac.dglort_map == FM10K_DGLORTMAP_NONE)
-               ret_val = fm10k_request_lport_map_pf(hw);
-
-out:
-       return ret_val;
+       return fm10k_get_host_state_generic(hw, switch_ready);
 }
 
 /* This structure defines the attributes to be parsed below */
@@ -1813,6 +1806,7 @@ static const struct fm10k_mac_ops mac_ops_pf = {
        .set_dma_mask           = fm10k_set_dma_mask_pf,
        .get_fault              = fm10k_get_fault_pf,
        .get_host_state         = fm10k_get_host_state_pf,
+       .request_lport_map      = fm10k_request_lport_map_pf,
 };
 
 static const struct fm10k_iov_ops iov_ops_pf = {
index b8bc061..f4e75c4 100644 (file)
@@ -526,6 +526,7 @@ struct fm10k_mac_ops {
        s32 (*stop_hw)(struct fm10k_hw *);
        s32 (*get_bus_info)(struct fm10k_hw *);
        s32 (*get_host_state)(struct fm10k_hw *, bool *);
+       s32 (*request_lport_map)(struct fm10k_hw *);
        s32 (*update_vlan)(struct fm10k_hw *, u32, u8, bool);
        s32 (*read_mac_addr)(struct fm10k_hw *);
        s32 (*update_uc_addr)(struct fm10k_hw *, u16, const u8 *,
@@ -562,6 +563,7 @@ struct fm10k_mac_info {
        bool tx_ready;
        u32 dglort_map;
        u8 itr_scale;
+       u64 reset_while_pending;
 };
 
 struct fm10k_swapi_table_info {
index 3b06685..337ba65 100644 (file)
@@ -34,7 +34,7 @@ static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
 
        /* we need to disable the queues before taking further steps */
        err = fm10k_stop_hw_generic(hw);
-       if (err)
+       if (err && err != FM10K_ERR_REQUESTS_PENDING)
                return err;
 
        /* If permanent address is set then we need to restore it */
@@ -67,7 +67,7 @@ static s32 fm10k_stop_hw_vf(struct fm10k_hw *hw)
                fm10k_write_reg(hw, FM10K_TDLEN(i), tdlen);
        }
 
-       return 0;
+       return err;
 }
 
 /**
@@ -83,7 +83,9 @@ static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
 
        /* shut down queues we own and reset DMA configuration */
        err = fm10k_stop_hw_vf(hw);
-       if (err)
+       if (err == FM10K_ERR_REQUESTS_PENDING)
+               hw->mac.reset_while_pending++;
+       else if (err)
                return err;
 
        /* Initiate VF reset */
@@ -96,9 +98,9 @@ static s32 fm10k_reset_hw_vf(struct fm10k_hw *hw)
        /* Clear reset bit and verify it was cleared */
        fm10k_write_reg(hw, FM10K_VFCTRL, 0);
        if (fm10k_read_reg(hw, FM10K_VFCTRL) & FM10K_VFCTRL_RST)
-               err = FM10K_ERR_RESET_FAILED;
+               return FM10K_ERR_RESET_FAILED;
 
-       return err;
+       return 0;
 }
 
 /**
index 2b11405..7e00cbd 100644 (file)
@@ -1339,6 +1339,13 @@ struct i40e_mac_filter *i40e_add_filter(struct i40e_vsi *vsi,
        if (!vsi || !macaddr)
                return NULL;
 
+       /* Do not allow broadcast filter to be added since broadcast filter
+        * is added as part of add VSI for any newly created VSI except
+        * FDIR VSI
+        */
+       if (is_broadcast_ether_addr(macaddr))
+               return NULL;
+
        f = i40e_find_filter(vsi, macaddr, vlan, is_vf, is_netdev);
        if (!f) {
                f = kzalloc(sizeof(*f), GFP_ATOMIC);
@@ -7740,10 +7747,11 @@ static int i40e_init_msix(struct i40e_pf *pf)
  * i40e_vsi_alloc_q_vector - Allocate memory for a single interrupt vector
  * @vsi: the VSI being configured
  * @v_idx: index of the vector in the vsi struct
+ * @cpu: cpu to be used on affinity_mask
  *
  * We allocate one q_vector.  If allocation fails we return -ENOMEM.
  **/
-static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
+static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx, int cpu)
 {
        struct i40e_q_vector *q_vector;
 
@@ -7754,7 +7762,8 @@ static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
 
        q_vector->vsi = vsi;
        q_vector->v_idx = v_idx;
-       cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
+       cpumask_set_cpu(cpu, &q_vector->affinity_mask);
+
        if (vsi->netdev)
                netif_napi_add(vsi->netdev, &q_vector->napi,
                               i40e_napi_poll, NAPI_POLL_WEIGHT);
@@ -7778,8 +7787,7 @@ static int i40e_vsi_alloc_q_vector(struct i40e_vsi *vsi, int v_idx)
 static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
 {
        struct i40e_pf *pf = vsi->back;
-       int v_idx, num_q_vectors;
-       int err;
+       int err, v_idx, num_q_vectors, current_cpu;
 
        /* if not MSIX, give the one vector only to the LAN VSI */
        if (pf->flags & I40E_FLAG_MSIX_ENABLED)
@@ -7789,10 +7797,15 @@ static int i40e_vsi_alloc_q_vectors(struct i40e_vsi *vsi)
        else
                return -EINVAL;
 
+       current_cpu = cpumask_first(cpu_online_mask);
+
        for (v_idx = 0; v_idx < num_q_vectors; v_idx++) {
-               err = i40e_vsi_alloc_q_vector(vsi, v_idx);
+               err = i40e_vsi_alloc_q_vector(vsi, v_idx, current_cpu);
                if (err)
                        goto err_out;
+               current_cpu = cpumask_next(current_cpu, cpu_online_mask);
+               if (unlikely(current_cpu >= nr_cpu_ids))
+                       current_cpu = cpumask_first(cpu_online_mask);
        }
 
        return 0;
@@ -9174,6 +9187,7 @@ int i40e_is_vsi_uplink_mode_veb(struct i40e_vsi *vsi)
 static int i40e_add_vsi(struct i40e_vsi *vsi)
 {
        int ret = -ENODEV;
+       i40e_status aq_ret = 0;
        u8 laa_macaddr[ETH_ALEN];
        bool found_laa_mac_filter = false;
        struct i40e_pf *pf = vsi->back;
@@ -9363,6 +9377,18 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
                vsi->seid = ctxt.seid;
                vsi->id = ctxt.vsi_number;
        }
+       /* Except FDIR VSI, for all other VSIs set the broadcast filter */
+       if (vsi->type != I40E_VSI_FDIR) {
+               aq_ret = i40e_aq_set_vsi_broadcast(hw, vsi->seid, true, NULL);
+               if (aq_ret) {
+                       ret = i40e_aq_rc_to_posix(aq_ret,
+                                                 hw->aq.asq_last_status);
+                       dev_info(&pf->pdev->dev,
+                                "set brdcast promisc failed, err %s, aq_err %s\n",
+                                i40e_stat_str(hw, aq_ret),
+                                i40e_aq_str(hw, hw->aq.asq_last_status));
+               }
+       }
 
        spin_lock_bh(&vsi->mac_filter_list_lock);
        /* If macvlan filters already exist, force them to get loaded */
index 55f151f..a8868e1 100644 (file)
@@ -1280,8 +1280,8 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
                                    union i40e_rx_desc *rx_desc)
 {
        struct i40e_rx_ptype_decoded decoded;
-       bool ipv4, ipv6, tunnel = false;
        u32 rx_error, rx_status;
+       bool ipv4, ipv6;
        u8 ptype;
        u64 qword;
 
@@ -1336,19 +1336,23 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
        if (rx_error & BIT(I40E_RX_DESC_ERROR_PPRS_SHIFT))
                return;
 
-       /* The hardware supported by this driver does not validate outer
-        * checksums for tunneled VXLAN or GENEVE frames.  I don't agree
-        * with it but the specification states that you "MAY validate", it
-        * doesn't make it a hard requirement so if we have validated the
-        * inner checksum report CHECKSUM_UNNECESSARY.
+       /* If there is an outer header present that might contain a checksum
+        * we need to bump the checksum level by 1 to reflect the fact that
+        * we are indicating we validated the inner checksum.
         */
-       if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
-                                 I40E_RX_PTYPE_INNER_PROT_UDP |
-                                 I40E_RX_PTYPE_INNER_PROT_SCTP))
-               tunnel = true;
-
-       skb->ip_summed = CHECKSUM_UNNECESSARY;
-       skb->csum_level = tunnel ? 1 : 0;
+       if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT)
+               skb->csum_level = 1;
+
+       /* Only report checksum unnecessary for TCP, UDP, or SCTP */
+       switch (decoded.inner_prot) {
+       case I40E_RX_PTYPE_INNER_PROT_TCP:
+       case I40E_RX_PTYPE_INNER_PROT_UDP:
+       case I40E_RX_PTYPE_INNER_PROT_SCTP:
+               skb->ip_summed = CHECKSUM_UNNECESSARY;
+               /* fall through */
+       default:
+               break;
+       }
 
        return;
 
index be99189..79d99cd 100644 (file)
@@ -752,8 +752,8 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
                                    union i40e_rx_desc *rx_desc)
 {
        struct i40e_rx_ptype_decoded decoded;
-       bool ipv4, ipv6, tunnel = false;
        u32 rx_error, rx_status;
+       bool ipv4, ipv6;
        u8 ptype;
        u64 qword;
 
@@ -808,19 +808,23 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
        if (rx_error & BIT(I40E_RX_DESC_ERROR_PPRS_SHIFT))
                return;
 
-       /* The hardware supported by this driver does not validate outer
-        * checksums for tunneled VXLAN or GENEVE frames.  I don't agree
-        * with it but the specification states that you "MAY validate", it
-        * doesn't make it a hard requirement so if we have validated the
-        * inner checksum report CHECKSUM_UNNECESSARY.
+       /* If there is an outer header present that might contain a checksum
+        * we need to bump the checksum level by 1 to reflect the fact that
+        * we are indicating we validated the inner checksum.
         */
-       if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
-                                 I40E_RX_PTYPE_INNER_PROT_UDP |
-                                 I40E_RX_PTYPE_INNER_PROT_SCTP))
-               tunnel = true;
-
-       skb->ip_summed = CHECKSUM_UNNECESSARY;
-       skb->csum_level = tunnel ? 1 : 0;
+       if (decoded.tunnel_type >= I40E_RX_PTYPE_TUNNEL_IP_GRENAT)
+               skb->csum_level = 1;
+
+       /* Only report checksum unnecessary for TCP, UDP, or SCTP */
+       switch (decoded.inner_prot) {
+       case I40E_RX_PTYPE_INNER_PROT_TCP:
+       case I40E_RX_PTYPE_INNER_PROT_UDP:
+       case I40E_RX_PTYPE_INNER_PROT_SCTP:
+               skb->ip_summed = CHECKSUM_UNNECESSARY;
+               /* fall through */
+       default:
+               break;
+       }
 
        return;
 
index fd5a761..fdbea54 100644 (file)
@@ -2887,7 +2887,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
        if (!test_bit(__IXGBE_DOWN, &adapter->state))
                ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
 
-       return 0;
+       return min(work_done, budget - 1);
 }
 
 /**
@@ -8851,9 +8851,7 @@ static int ixgbe_set_features(struct net_device *netdev,
 /**
  * ixgbe_add_vxlan_port - Get notifications about VXLAN ports that come up
  * @dev: The port's netdev
- * @sa_family: Socket Family that VXLAN is notifiying us about
- * @port: New UDP port number that VXLAN started listening to
- * @type: Enumerated type specifying UDP tunnel type
+ * @ti: Tunnel endpoint information
  **/
 static void ixgbe_add_vxlan_port(struct net_device *dev,
                                 struct udp_tunnel_info *ti)
@@ -8888,9 +8886,7 @@ static void ixgbe_add_vxlan_port(struct net_device *dev,
 /**
  * ixgbe_del_vxlan_port - Get notifications about VXLAN ports that go away
  * @dev: The port's netdev
- * @sa_family: Socket Family that VXLAN is notifying us about
- * @port: UDP port number that VXLAN stopped listening to
- * @type: Enumerated type specifying UDP tunnel type
+ * @ti: Tunnel endpoint information
  **/
 static void ixgbe_del_vxlan_port(struct net_device *dev,
                                 struct udp_tunnel_info *ti)
index d5d263b..f92018b 100644 (file)
 /* Various constants */
 
 /* Coalescing */
-#define MVNETA_TXDONE_COAL_PKTS                1
+#define MVNETA_TXDONE_COAL_PKTS                0       /* interrupt per packet */
 #define MVNETA_RX_COAL_PKTS            32
 #define MVNETA_RX_COAL_USEC            100
 
index 54d5154..aeeb2e7 100644 (file)
@@ -247,7 +247,6 @@ struct pxa168_eth_private {
         */
        struct timer_list timeout;
        struct mii_bus *smi_bus;
-       struct phy_device *phy;
 
        /* clock */
        struct clk *clk;
@@ -275,8 +274,8 @@ enum hash_table_entry {
        HASH_ENTRY_RECEIVE_DISCARD_BIT = 2
 };
 
-static int pxa168_get_settings(struct net_device *dev, struct ethtool_cmd *cmd);
-static int pxa168_set_settings(struct net_device *dev, struct ethtool_cmd *cmd);
+static int pxa168_get_link_ksettings(struct net_device *dev,
+                                    struct ethtool_link_ksettings *cmd);
 static int pxa168_init_hw(struct pxa168_eth_private *pep);
 static int pxa168_init_phy(struct net_device *dev);
 static void eth_port_reset(struct net_device *dev);
@@ -644,7 +643,7 @@ static void eth_port_start(struct net_device *dev)
        struct pxa168_eth_private *pep = netdev_priv(dev);
        int tx_curr_desc, rx_curr_desc;
 
-       phy_start(pep->phy);
+       phy_start(dev->phydev);
 
        /* Assignment of Tx CTRP of given queue */
        tx_curr_desc = pep->tx_curr_desc_q;
@@ -700,7 +699,7 @@ static void eth_port_reset(struct net_device *dev)
        val &= ~PCR_EN;
        wrl(pep, PORT_CONFIG, val);
 
-       phy_stop(pep->phy);
+       phy_stop(dev->phydev);
 }
 
 /*
@@ -943,7 +942,7 @@ static int set_port_config_ext(struct pxa168_eth_private *pep)
 static void pxa168_eth_adjust_link(struct net_device *dev)
 {
        struct pxa168_eth_private *pep = netdev_priv(dev);
-       struct phy_device *phy = pep->phy;
+       struct phy_device *phy = dev->phydev;
        u32 cfg, cfg_o = rdl(pep, PORT_CONFIG);
        u32 cfgext, cfgext_o = rdl(pep, PORT_CONFIG_EXT);
 
@@ -972,35 +971,37 @@ static void pxa168_eth_adjust_link(struct net_device *dev)
 static int pxa168_init_phy(struct net_device *dev)
 {
        struct pxa168_eth_private *pep = netdev_priv(dev);
-       struct ethtool_cmd cmd;
+       struct ethtool_link_ksettings cmd;
+       struct phy_device *phy = NULL;
        int err;
 
-       if (pep->phy)
+       if (dev->phydev)
                return 0;
 
-       pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);
-       if (IS_ERR(pep->phy))
-               return PTR_ERR(pep->phy);
+       phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);
+       if (IS_ERR(phy))
+               return PTR_ERR(phy);
 
-       err = phy_connect_direct(dev, pep->phy, pxa168_eth_adjust_link,
+       err = phy_connect_direct(dev, phy, pxa168_eth_adjust_link,
                                 pep->phy_intf);
        if (err)
                return err;
 
-       err = pxa168_get_settings(dev, &cmd);
+       err = pxa168_get_link_ksettings(dev, &cmd);
        if (err)
                return err;
 
-       cmd.phy_address = pep->phy_addr;
-       cmd.speed = pep->phy_speed;
-       cmd.duplex = pep->phy_duplex;
-       cmd.advertising = PHY_BASIC_FEATURES;
-       cmd.autoneg = AUTONEG_ENABLE;
+       cmd.base.phy_address = pep->phy_addr;
+       cmd.base.speed = pep->phy_speed;
+       cmd.base.duplex = pep->phy_duplex;
+       ethtool_convert_legacy_u32_to_link_mode(cmd.link_modes.advertising,
+                                               PHY_BASIC_FEATURES);
+       cmd.base.autoneg = AUTONEG_ENABLE;
 
-       if (cmd.speed != 0)
-               cmd.autoneg = AUTONEG_DISABLE;
+       if (cmd.base.speed != 0)
+               cmd.base.autoneg = AUTONEG_DISABLE;
 
-       return pxa168_set_settings(dev, &cmd);
+       return phy_ethtool_set_link_ksettings(dev, &cmd);
 }
 
 static int pxa168_init_hw(struct pxa168_eth_private *pep)
@@ -1366,32 +1367,24 @@ static int pxa168_smi_write(struct mii_bus *bus, int phy_addr, int regnum,
 static int pxa168_eth_do_ioctl(struct net_device *dev, struct ifreq *ifr,
                               int cmd)
 {
-       struct pxa168_eth_private *pep = netdev_priv(dev);
-       if (pep->phy != NULL)
-               return phy_mii_ioctl(pep->phy, ifr, cmd);
+       if (dev->phydev != NULL)
+               return phy_mii_ioctl(dev->phydev, ifr, cmd);
 
        return -EOPNOTSUPP;
 }
 
-static int pxa168_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+static int pxa168_get_link_ksettings(struct net_device *dev,
+                                    struct ethtool_link_ksettings *cmd)
 {
-       struct pxa168_eth_private *pep = netdev_priv(dev);
        int err;
 
-       err = phy_read_status(pep->phy);
+       err = phy_read_status(dev->phydev);
        if (err == 0)
-               err = phy_ethtool_gset(pep->phy, cmd);
+               err = phy_ethtool_ksettings_get(dev->phydev, cmd);
 
        return err;
 }
 
-static int pxa168_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct pxa168_eth_private *pep = netdev_priv(dev);
-
-       return phy_ethtool_sset(pep->phy, cmd);
-}
-
 static void pxa168_get_drvinfo(struct net_device *dev,
                               struct ethtool_drvinfo *info)
 {
@@ -1402,11 +1395,11 @@ static void pxa168_get_drvinfo(struct net_device *dev,
 }
 
 static const struct ethtool_ops pxa168_ethtool_ops = {
-       .get_settings   = pxa168_get_settings,
-       .set_settings   = pxa168_set_settings,
        .get_drvinfo    = pxa168_get_drvinfo,
        .get_link       = ethtool_op_get_link,
        .get_ts_info    = ethtool_op_get_ts_info,
+       .get_link_ksettings = pxa168_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 static const struct net_device_ops pxa168_eth_netdev_ops = {
@@ -1569,8 +1562,8 @@ static int pxa168_eth_remove(struct platform_device *pdev)
                                  pep->htpr, pep->htpr_dma);
                pep->htpr = NULL;
        }
-       if (pep->phy)
-               phy_disconnect(pep->phy);
+       if (dev->phydev)
+               phy_disconnect(dev->phydev);
        if (pep->clk) {
                clk_disable_unprepare(pep->clk);
        }
index 760f3d7..b57ae3a 100644 (file)
@@ -76,8 +76,8 @@ static int mtk_mdio_busy_wait(struct mtk_eth *eth)
        return -1;
 }
 
-u32 _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr,
-                   u32 phy_register, u32 write_data)
+static u32 _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr,
+                          u32 phy_register, u32 write_data)
 {
        if (mtk_mdio_busy_wait(eth))
                return -1;
@@ -95,7 +95,7 @@ u32 _mtk_mdio_write(struct mtk_eth *eth, u32 phy_addr,
        return 0;
 }
 
-u32 _mtk_mdio_read(struct mtk_eth *eth, int phy_addr, int phy_reg)
+static u32 _mtk_mdio_read(struct mtk_eth *eth, int phy_addr, int phy_reg)
 {
        u32 d;
 
index 51a2e82..bdda17d 100644 (file)
@@ -1042,6 +1042,8 @@ static int mlx4_en_set_ringparam(struct net_device *dev,
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
        struct mlx4_en_dev *mdev = priv->mdev;
+       struct mlx4_en_port_profile new_prof;
+       struct mlx4_en_priv *tmp;
        u32 rx_size, tx_size;
        int port_up = 0;
        int err = 0;
@@ -1061,22 +1063,25 @@ static int mlx4_en_set_ringparam(struct net_device *dev,
            tx_size == priv->tx_ring[0]->size)
                return 0;
 
+       tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
+       if (!tmp)
+               return -ENOMEM;
+
        mutex_lock(&mdev->state_lock);
+       memcpy(&new_prof, priv->prof, sizeof(struct mlx4_en_port_profile));
+       new_prof.tx_ring_size = tx_size;
+       new_prof.rx_ring_size = rx_size;
+       err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof);
+       if (err)
+               goto out;
+
        if (priv->port_up) {
                port_up = 1;
                mlx4_en_stop_port(dev, 1);
        }
 
-       mlx4_en_free_resources(priv);
-
-       priv->prof->tx_ring_size = tx_size;
-       priv->prof->rx_ring_size = rx_size;
+       mlx4_en_safe_replace_resources(priv, tmp);
 
-       err = mlx4_en_alloc_resources(priv);
-       if (err) {
-               en_err(priv, "Failed reallocating port resources\n");
-               goto out;
-       }
        if (port_up) {
                err = mlx4_en_start_port(dev);
                if (err)
@@ -1084,8 +1089,8 @@ static int mlx4_en_set_ringparam(struct net_device *dev,
        }
 
        err = mlx4_en_moderation_update(priv);
-
 out:
+       kfree(tmp);
        mutex_unlock(&mdev->state_lock);
        return err;
 }
@@ -1713,6 +1718,8 @@ static int mlx4_en_set_channels(struct net_device *dev,
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
        struct mlx4_en_dev *mdev = priv->mdev;
+       struct mlx4_en_port_profile new_prof;
+       struct mlx4_en_priv *tmp;
        int port_up = 0;
        int err = 0;
 
@@ -1722,25 +1729,35 @@ static int mlx4_en_set_channels(struct net_device *dev,
            !channel->tx_count || !channel->rx_count)
                return -EINVAL;
 
-       mutex_lock(&mdev->state_lock);
-       if (priv->port_up) {
-               port_up = 1;
-               mlx4_en_stop_port(dev, 1);
+       if (channel->tx_count * MLX4_EN_NUM_UP <= priv->xdp_ring_num) {
+               en_err(priv, "Minimum %d tx channels required with XDP on\n",
+                      priv->xdp_ring_num / MLX4_EN_NUM_UP + 1);
+               return -EINVAL;
        }
 
-       mlx4_en_free_resources(priv);
+       tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
+       if (!tmp)
+               return -ENOMEM;
 
-       priv->num_tx_rings_p_up = channel->tx_count;
-       priv->tx_ring_num = channel->tx_count * MLX4_EN_NUM_UP;
-       priv->rx_ring_num = channel->rx_count;
+       mutex_lock(&mdev->state_lock);
+       memcpy(&new_prof, priv->prof, sizeof(struct mlx4_en_port_profile));
+       new_prof.num_tx_rings_p_up = channel->tx_count;
+       new_prof.tx_ring_num = channel->tx_count * MLX4_EN_NUM_UP;
+       new_prof.rx_ring_num = channel->rx_count;
 
-       err = mlx4_en_alloc_resources(priv);
-       if (err) {
-               en_err(priv, "Failed reallocating port resources\n");
+       err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof);
+       if (err)
                goto out;
+
+       if (priv->port_up) {
+               port_up = 1;
+               mlx4_en_stop_port(dev, 1);
        }
 
-       netif_set_real_num_tx_queues(dev, priv->tx_ring_num);
+       mlx4_en_safe_replace_resources(priv, tmp);
+
+       netif_set_real_num_tx_queues(dev, priv->tx_ring_num -
+                                                       priv->xdp_ring_num);
        netif_set_real_num_rx_queues(dev, priv->rx_ring_num);
 
        if (dev->num_tc)
@@ -1756,8 +1773,8 @@ static int mlx4_en_set_channels(struct net_device *dev,
        }
 
        err = mlx4_en_moderation_update(priv);
-
 out:
+       kfree(tmp);
        mutex_unlock(&mdev->state_lock);
        return err;
 }
index 6083775..4198e9b 100644
@@ -31,6 +31,7 @@
  *
  */
 
+#include <linux/bpf.h>
 #include <linux/etherdevice.h>
 #include <linux/tcp.h>
 #include <linux/if_vlan.h>
@@ -1521,6 +1522,24 @@ static void mlx4_en_free_affinity_hint(struct mlx4_en_priv *priv, int ring_idx)
        free_cpumask_var(priv->rx_ring[ring_idx]->affinity_mask);
 }
 
+static void mlx4_en_init_recycle_ring(struct mlx4_en_priv *priv,
+                                     int tx_ring_idx)
+{
+       struct mlx4_en_tx_ring *tx_ring = priv->tx_ring[tx_ring_idx];
+       int rr_index;
+
+       rr_index = (priv->xdp_ring_num - priv->tx_ring_num) + tx_ring_idx;
+       if (rr_index >= 0) {
+               tx_ring->free_tx_desc = mlx4_en_recycle_tx_desc;
+               tx_ring->recycle_ring = priv->rx_ring[rr_index];
+               en_dbg(DRV, priv,
+                      "Set tx_ring[%d]->recycle_ring = rx_ring[%d]\n",
+                      tx_ring_idx, rr_index);
+       } else {
+               tx_ring->recycle_ring = NULL;
+       }
+}
+
 int mlx4_en_start_port(struct net_device *dev)
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
@@ -1643,6 +1662,8 @@ int mlx4_en_start_port(struct net_device *dev)
                }
                tx_ring->tx_queue = netdev_get_tx_queue(dev, i);
 
+               mlx4_en_init_recycle_ring(priv, i);
+
                /* Arm CQ for TX completions */
                mlx4_en_arm_cq(priv, cq);
 
@@ -1964,7 +1985,7 @@ static int mlx4_en_close(struct net_device *dev)
        return 0;
 }
 
-void mlx4_en_free_resources(struct mlx4_en_priv *priv)
+static void mlx4_en_free_resources(struct mlx4_en_priv *priv)
 {
        int i;
 
@@ -1989,7 +2010,7 @@ void mlx4_en_free_resources(struct mlx4_en_priv *priv)
 
 }
 
-int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
+static int mlx4_en_alloc_resources(struct mlx4_en_priv *priv)
 {
        struct mlx4_en_port_profile *prof = priv->prof;
        int i;
@@ -2054,6 +2075,77 @@ static void mlx4_en_shutdown(struct net_device *dev)
        rtnl_unlock();
 }
 
+static int mlx4_en_copy_priv(struct mlx4_en_priv *dst,
+                            struct mlx4_en_priv *src,
+                            struct mlx4_en_port_profile *prof)
+{
+       memcpy(&dst->hwtstamp_config, &prof->hwtstamp_config,
+              sizeof(dst->hwtstamp_config));
+       dst->num_tx_rings_p_up = src->mdev->profile.num_tx_rings_p_up;
+       dst->tx_ring_num = prof->tx_ring_num;
+       dst->rx_ring_num = prof->rx_ring_num;
+       dst->flags = prof->flags;
+       dst->mdev = src->mdev;
+       dst->port = src->port;
+       dst->dev = src->dev;
+       dst->prof = prof;
+       dst->stride = roundup_pow_of_two(sizeof(struct mlx4_en_rx_desc) +
+                                        DS_SIZE * MLX4_EN_MAX_RX_FRAGS);
+
+       dst->tx_ring = kzalloc(sizeof(struct mlx4_en_tx_ring *) * MAX_TX_RINGS,
+                               GFP_KERNEL);
+       if (!dst->tx_ring)
+               return -ENOMEM;
+
+       dst->tx_cq = kzalloc(sizeof(struct mlx4_en_cq *) * MAX_TX_RINGS,
+                             GFP_KERNEL);
+       if (!dst->tx_cq) {
+               kfree(dst->tx_ring);
+               return -ENOMEM;
+       }
+       return 0;
+}
+
+static void mlx4_en_update_priv(struct mlx4_en_priv *dst,
+                               struct mlx4_en_priv *src)
+{
+       memcpy(dst->rx_ring, src->rx_ring,
+              sizeof(struct mlx4_en_rx_ring *) * src->rx_ring_num);
+       memcpy(dst->rx_cq, src->rx_cq,
+              sizeof(struct mlx4_en_cq *) * src->rx_ring_num);
+       memcpy(&dst->hwtstamp_config, &src->hwtstamp_config,
+              sizeof(dst->hwtstamp_config));
+       dst->tx_ring_num = src->tx_ring_num;
+       dst->rx_ring_num = src->rx_ring_num;
+       dst->tx_ring = src->tx_ring;
+       dst->tx_cq = src->tx_cq;
+       memcpy(dst->prof, src->prof, sizeof(struct mlx4_en_port_profile));
+}
+
+int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv,
+                               struct mlx4_en_priv *tmp,
+                               struct mlx4_en_port_profile *prof)
+{
+       mlx4_en_copy_priv(tmp, priv, prof);
+
+       if (mlx4_en_alloc_resources(tmp)) {
+               en_warn(priv,
+                       "%s: Resource allocation failed, using previous configuration\n",
+                       __func__);
+               kfree(tmp->tx_ring);
+               kfree(tmp->tx_cq);
+               return -ENOMEM;
+       }
+       return 0;
+}
+
+void mlx4_en_safe_replace_resources(struct mlx4_en_priv *priv,
+                                   struct mlx4_en_priv *tmp)
+{
+       mlx4_en_free_resources(priv);
+       mlx4_en_update_priv(priv, tmp);
+}
+
 void mlx4_en_destroy_netdev(struct net_device *dev)
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
@@ -2090,6 +2182,10 @@ void mlx4_en_destroy_netdev(struct net_device *dev)
        mdev->upper[priv->port] = NULL;
        mutex_unlock(&mdev->state_lock);
 
+#ifdef CONFIG_RFS_ACCEL
+       mlx4_en_cleanup_filters(priv);
+#endif
+
        mlx4_en_free_resources(priv);
 
        kfree(priv->tx_ring);
@@ -2112,6 +2208,11 @@ static int mlx4_en_change_mtu(struct net_device *dev, int new_mtu)
                en_err(priv, "Bad MTU size:%d.\n", new_mtu);
                return -EPERM;
        }
+       if (priv->xdp_ring_num && MLX4_EN_EFF_MTU(new_mtu) > FRAG_SZ0) {
+               en_err(priv, "MTU size:%d requires frags but XDP running\n",
+                      new_mtu);
+               return -EOPNOTSUPP;
+       }
        dev->mtu = new_mtu;
 
        if (netif_running(dev)) {
@@ -2520,6 +2621,103 @@ static int mlx4_en_set_tx_maxrate(struct net_device *dev, int queue_index, u32 m
        return err;
 }
 
+static int mlx4_xdp_set(struct net_device *dev, struct bpf_prog *prog)
+{
+       struct mlx4_en_priv *priv = netdev_priv(dev);
+       struct mlx4_en_dev *mdev = priv->mdev;
+       struct bpf_prog *old_prog;
+       int xdp_ring_num;
+       int port_up = 0;
+       int err;
+       int i;
+
+       xdp_ring_num = prog ? ALIGN(priv->rx_ring_num, MLX4_EN_NUM_UP) : 0;
+
+       /* No need to reconfigure buffers when simply swapping the
+        * program for a new one.
+        */
+       if (priv->xdp_ring_num == xdp_ring_num) {
+               if (prog) {
+                       prog = bpf_prog_add(prog, priv->rx_ring_num - 1);
+                       if (IS_ERR(prog))
+                               return PTR_ERR(prog);
+               }
+               for (i = 0; i < priv->rx_ring_num; i++) {
+                       /* This xchg is paired with READ_ONCE in the fastpath */
+                       old_prog = xchg(&priv->rx_ring[i]->xdp_prog, prog);
+                       if (old_prog)
+                               bpf_prog_put(old_prog);
+               }
+               return 0;
+       }
+
+       if (priv->num_frags > 1) {
+               en_err(priv, "Cannot set XDP if MTU requires multiple frags\n");
+               return -EOPNOTSUPP;
+       }
+
+       if (priv->tx_ring_num < xdp_ring_num + MLX4_EN_NUM_UP) {
+               en_err(priv,
+                      "Minimum %d tx channels required to run XDP\n",
+                      (xdp_ring_num + MLX4_EN_NUM_UP) / MLX4_EN_NUM_UP);
+               return -EINVAL;
+       }
+
+       if (prog) {
+               prog = bpf_prog_add(prog, priv->rx_ring_num - 1);
+               if (IS_ERR(prog))
+                       return PTR_ERR(prog);
+       }
+
+       mutex_lock(&mdev->state_lock);
+       if (priv->port_up) {
+               port_up = 1;
+               mlx4_en_stop_port(dev, 1);
+       }
+
+       priv->xdp_ring_num = xdp_ring_num;
+       netif_set_real_num_tx_queues(dev, priv->tx_ring_num -
+                                                       priv->xdp_ring_num);
+
+       for (i = 0; i < priv->rx_ring_num; i++) {
+               old_prog = xchg(&priv->rx_ring[i]->xdp_prog, prog);
+               if (old_prog)
+                       bpf_prog_put(old_prog);
+       }
+
+       if (port_up) {
+               err = mlx4_en_start_port(dev);
+               if (err) {
+                       en_err(priv, "Failed starting port %d for XDP change\n",
+                              priv->port);
+                       queue_work(mdev->workqueue, &priv->watchdog_task);
+               }
+       }
+
+       mutex_unlock(&mdev->state_lock);
+       return 0;
+}
+
+static bool mlx4_xdp_attached(struct net_device *dev)
+{
+       struct mlx4_en_priv *priv = netdev_priv(dev);
+
+       return !!priv->xdp_ring_num;
+}
+
+static int mlx4_xdp(struct net_device *dev, struct netdev_xdp *xdp)
+{
+       switch (xdp->command) {
+       case XDP_SETUP_PROG:
+               return mlx4_xdp_set(dev, xdp->prog);
+       case XDP_QUERY_PROG:
+               xdp->prog_attached = mlx4_xdp_attached(dev);
+               return 0;
+       default:
+               return -EINVAL;
+       }
+}
+
 static const struct net_device_ops mlx4_netdev_ops = {
        .ndo_open               = mlx4_en_open,
        .ndo_stop               = mlx4_en_close,
@@ -2548,6 +2746,7 @@ static const struct net_device_ops mlx4_netdev_ops = {
        .ndo_udp_tunnel_del     = mlx4_en_del_vxlan_port,
        .ndo_features_check     = mlx4_en_features_check,
        .ndo_set_tx_maxrate     = mlx4_en_set_tx_maxrate,
+       .ndo_xdp                = mlx4_xdp,
 };
 
 static const struct net_device_ops mlx4_netdev_ops_master = {
@@ -2584,6 +2783,7 @@ static const struct net_device_ops mlx4_netdev_ops_master = {
        .ndo_udp_tunnel_del     = mlx4_en_del_vxlan_port,
        .ndo_features_check     = mlx4_en_features_check,
        .ndo_set_tx_maxrate     = mlx4_en_set_tx_maxrate,
+       .ndo_xdp                = mlx4_xdp,
 };
 
 struct mlx4_en_bond {
@@ -3148,6 +3348,8 @@ int mlx4_en_reset_config(struct net_device *dev,
 {
        struct mlx4_en_priv *priv = netdev_priv(dev);
        struct mlx4_en_dev *mdev = priv->mdev;
+       struct mlx4_en_port_profile new_prof;
+       struct mlx4_en_priv *tmp;
        int port_up = 0;
        int err = 0;
 
@@ -3164,19 +3366,29 @@ int mlx4_en_reset_config(struct net_device *dev,
                return -EINVAL;
        }
 
+       tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
+       if (!tmp)
+               return -ENOMEM;
+
        mutex_lock(&mdev->state_lock);
+
+       memcpy(&new_prof, priv->prof, sizeof(struct mlx4_en_port_profile));
+       memcpy(&new_prof.hwtstamp_config, &ts_config, sizeof(ts_config));
+
+       err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof);
+       if (err)
+               goto out;
+
        if (priv->port_up) {
                port_up = 1;
                mlx4_en_stop_port(dev, 1);
        }
 
-       mlx4_en_free_resources(priv);
-
        en_warn(priv, "Changing device configuration rx filter(%x) rx vlan(%x)\n",
-               ts_config.rx_filter, !!(features & NETIF_F_HW_VLAN_CTAG_RX));
+               ts_config.rx_filter,
+               !!(features & NETIF_F_HW_VLAN_CTAG_RX));
 
-       priv->hwtstamp_config.tx_type = ts_config.tx_type;
-       priv->hwtstamp_config.rx_filter = ts_config.rx_filter;
+       mlx4_en_safe_replace_resources(priv, tmp);
 
        if (DEV_FEATURE_CHANGED(dev, features, NETIF_F_HW_VLAN_CTAG_RX)) {
                if (features & NETIF_F_HW_VLAN_CTAG_RX)
@@ -3210,11 +3422,6 @@ int mlx4_en_reset_config(struct net_device *dev,
                dev->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
        }
 
-       err = mlx4_en_alloc_resources(priv);
-       if (err) {
-               en_err(priv, "Failed reallocating port resources\n");
-               goto out;
-       }
        if (port_up) {
                err = mlx4_en_start_port(dev);
                if (err)
@@ -3223,6 +3430,8 @@ int mlx4_en_reset_config(struct net_device *dev,
 
 out:
        mutex_unlock(&mdev->state_lock);
-       netdev_features_change(dev);
+       kfree(tmp);
+       if (!err)
+               netdev_features_change(dev);
        return err;
 }
index c1b3a9c..2040dad 100644
@@ -32,6 +32,7 @@
  */
 
 #include <net/busy_poll.h>
+#include <linux/bpf.h>
 #include <linux/mlx4/cq.h>
 #include <linux/slab.h>
 #include <linux/mlx4/qp.h>
@@ -57,7 +58,7 @@ static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
        struct page *page;
        dma_addr_t dma;
 
-       for (order = MLX4_EN_ALLOC_PREFER_ORDER; ;) {
+       for (order = frag_info->order; ;) {
                gfp_t gfp = _gfp;
 
                if (order)
@@ -70,7 +71,7 @@ static int mlx4_alloc_pages(struct mlx4_en_priv *priv,
                        return -ENOMEM;
        }
        dma = dma_map_page(priv->ddev, page, 0, PAGE_SIZE << order,
-                          PCI_DMA_FROMDEVICE);
+                          frag_info->dma_dir);
        if (dma_mapping_error(priv->ddev, dma)) {
                put_page(page);
                return -ENOMEM;
@@ -124,7 +125,8 @@ out:
        while (i--) {
                if (page_alloc[i].page != ring_alloc[i].page) {
                        dma_unmap_page(priv->ddev, page_alloc[i].dma,
-                               page_alloc[i].page_size, PCI_DMA_FROMDEVICE);
+                               page_alloc[i].page_size,
+                               priv->frag_info[i].dma_dir);
                        page = page_alloc[i].page;
                        /* Revert changes done by mlx4_alloc_pages */
                        page_ref_sub(page, page_alloc[i].page_size /
@@ -145,7 +147,7 @@ static void mlx4_en_free_frag(struct mlx4_en_priv *priv,
 
        if (next_frag_end > frags[i].page_size)
                dma_unmap_page(priv->ddev, frags[i].dma, frags[i].page_size,
-                              PCI_DMA_FROMDEVICE);
+                              frag_info->dma_dir);
 
        if (frags[i].page)
                put_page(frags[i].page);
@@ -176,7 +178,8 @@ out:
 
                page_alloc = &ring->page_alloc[i];
                dma_unmap_page(priv->ddev, page_alloc->dma,
-                              page_alloc->page_size, PCI_DMA_FROMDEVICE);
+                              page_alloc->page_size,
+                              priv->frag_info[i].dma_dir);
                page = page_alloc->page;
                /* Revert changes done by mlx4_alloc_pages */
                page_ref_sub(page, page_alloc->page_size /
@@ -201,7 +204,7 @@ static void mlx4_en_destroy_allocator(struct mlx4_en_priv *priv,
                       i, page_count(page_alloc->page));
 
                dma_unmap_page(priv->ddev, page_alloc->dma,
-                               page_alloc->page_size, PCI_DMA_FROMDEVICE);
+                               page_alloc->page_size, frag_info->dma_dir);
                while (page_alloc->page_offset + frag_info->frag_stride <
                       page_alloc->page_size) {
                        put_page(page_alloc->page);
@@ -244,6 +247,12 @@ static int mlx4_en_prepare_rx_desc(struct mlx4_en_priv *priv,
        struct mlx4_en_rx_alloc *frags = ring->rx_info +
                                        (index << priv->log_rx_info);
 
+       if (ring->page_cache.index > 0) {
+               frags[0] = ring->page_cache.buf[--ring->page_cache.index];
+               rx_desc->data[0].addr = cpu_to_be64(frags[0].dma);
+               return 0;
+       }
+
        return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp);
 }
 
@@ -502,26 +511,55 @@ void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
        }
 }
 
+/* When the rx ring is running in page-per-packet mode, a released frame can go
+ * directly into a small cache, to avoid unmapping or touching the page
+ * allocator. In bpf prog performance scenarios, buffers are either forwarded
+ * or dropped, never converted to skbs, so every page can come directly from
+ * this cache when it is sized to be a multiple of the napi budget.
+ */
+bool mlx4_en_rx_recycle(struct mlx4_en_rx_ring *ring,
+                       struct mlx4_en_rx_alloc *frame)
+{
+       struct mlx4_en_page_cache *cache = &ring->page_cache;
+
+       if (cache->index >= MLX4_EN_CACHE_SIZE)
+               return false;
+
+       cache->buf[cache->index++] = *frame;
+       return true;
+}
+
 void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
                             struct mlx4_en_rx_ring **pring,
                             u32 size, u16 stride)
 {
        struct mlx4_en_dev *mdev = priv->mdev;
        struct mlx4_en_rx_ring *ring = *pring;
+       struct bpf_prog *old_prog;
 
+       old_prog = READ_ONCE(ring->xdp_prog);
+       if (old_prog)
+               bpf_prog_put(old_prog);
        mlx4_free_hwq_res(mdev->dev, &ring->wqres, size * stride + TXBB_SIZE);
        vfree(ring->rx_info);
        ring->rx_info = NULL;
        kfree(ring);
        *pring = NULL;
-#ifdef CONFIG_RFS_ACCEL
-       mlx4_en_cleanup_filters(priv);
-#endif
 }
 
 void mlx4_en_deactivate_rx_ring(struct mlx4_en_priv *priv,
                                struct mlx4_en_rx_ring *ring)
 {
+       int i;
+
+       for (i = 0; i < ring->page_cache.index; i++) {
+               struct mlx4_en_rx_alloc *frame = &ring->page_cache.buf[i];
+
+               dma_unmap_page(priv->ddev, frame->dma, frame->page_size,
+                              priv->frag_info[0].dma_dir);
+               put_page(frame->page);
+       }
+       ring->page_cache.index = 0;
        mlx4_en_free_rx_buf(priv, ring);
        if (ring->stride <= TXBB_SIZE)
                ring->buf -= TXBB_SIZE;
@@ -743,7 +781,10 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
        struct mlx4_en_rx_ring *ring = priv->rx_ring[cq->ring];
        struct mlx4_en_rx_alloc *frags;
        struct mlx4_en_rx_desc *rx_desc;
+       struct bpf_prog *xdp_prog;
+       int doorbell_pending;
        struct sk_buff *skb;
+       int tx_index;
        int index;
        int nr;
        unsigned int length;
@@ -759,6 +800,10 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
        if (budget <= 0)
                return polled;
 
+       xdp_prog = READ_ONCE(ring->xdp_prog);
+       doorbell_pending = 0;
+       tx_index = (priv->tx_ring_num - priv->xdp_ring_num) + cq->ring;
+
        /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
         * descriptor offset can be deduced from the CQE index instead of
         * reading 'cqe->index' */
@@ -835,6 +880,43 @@ int mlx4_en_process_rx_cq(struct net_device *dev, struct mlx4_en_cq *cq, int bud
                l2_tunnel = (dev->hw_enc_features & NETIF_F_RXCSUM) &&
                        (cqe->vlan_my_qpn & cpu_to_be32(MLX4_CQE_L2_TUNNEL));
 
+               /* A bpf program gets first chance to drop the packet. It may
+                * read bytes but not past the end of the frag.
+                */
+               if (xdp_prog) {
+                       struct xdp_buff xdp;
+                       dma_addr_t dma;
+                       u32 act;
+
+                       dma = be64_to_cpu(rx_desc->data[0].addr);
+                       dma_sync_single_for_cpu(priv->ddev, dma,
+                                               priv->frag_info[0].frag_size,
+                                               DMA_FROM_DEVICE);
+
+                       xdp.data = page_address(frags[0].page) +
+                                                       frags[0].page_offset;
+                       xdp.data_end = xdp.data + length;
+
+                       act = bpf_prog_run_xdp(xdp_prog, &xdp);
+                       switch (act) {
+                       case XDP_PASS:
+                               break;
+                       case XDP_TX:
+                               if (!mlx4_en_xmit_frame(frags, dev,
+                                                       length, tx_index,
+                                                       &doorbell_pending))
+                                       goto consumed;
+                               break;
+                       default:
+                               bpf_warn_invalid_xdp_action(act);
+                       case XDP_ABORTED:
+                       case XDP_DROP:
+                               if (mlx4_en_rx_recycle(ring, frags))
+                                       goto consumed;
+                               goto next;
+                       }
+               }
+
                if (likely(dev->features & NETIF_F_RXCSUM)) {
                        if (cqe->status & cpu_to_be16(MLX4_CQE_STATUS_TCP |
                                                      MLX4_CQE_STATUS_UDP)) {
@@ -986,6 +1068,7 @@ next:
                for (nr = 0; nr < priv->num_frags; nr++)
                        mlx4_en_free_frag(priv, frags, nr);
 
+consumed:
                ++cq->mcq.cons_index;
                index = (cq->mcq.cons_index) & ring->size_mask;
                cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
@@ -994,6 +1077,9 @@ next:
        }
 
 out:
+       if (doorbell_pending)
+               mlx4_en_xmit_doorbell(priv->tx_ring[tx_index]);
+
        AVG_PERF_COUNTER(priv->pstats.rx_coal_avg, polled);
        mlx4_cq_set_ci(&cq->mcq);
        wmb(); /* ensure HW sees CQ consumer before we post new buffers */
@@ -1061,22 +1147,35 @@ static const int frag_sizes[] = {
 
 void mlx4_en_calc_rx_buf(struct net_device *dev)
 {
+       enum dma_data_direction dma_dir = PCI_DMA_FROMDEVICE;
        struct mlx4_en_priv *priv = netdev_priv(dev);
-       /* VLAN_HLEN is added twice,to support skb vlan tagged with multiple
-        * headers. (For example: ETH_P_8021Q and ETH_P_8021AD).
-        */
-       int eff_mtu = dev->mtu + ETH_HLEN + (2 * VLAN_HLEN);
+       int eff_mtu = MLX4_EN_EFF_MTU(dev->mtu);
+       int order = MLX4_EN_ALLOC_PREFER_ORDER;
+       u32 align = SMP_CACHE_BYTES;
        int buf_size = 0;
        int i = 0;
 
+       /* bpf requires buffers to be set up as 1 packet per page.
+        * This only works when num_frags == 1.
+        */
+       if (priv->xdp_ring_num) {
+               dma_dir = PCI_DMA_BIDIRECTIONAL;
+               /* This will gain efficient xdp frame recycling at the expense
+                * of more costly truesize accounting
+                */
+               align = PAGE_SIZE;
+               order = 0;
+       }
+
        while (buf_size < eff_mtu) {
+               priv->frag_info[i].order = order;
                priv->frag_info[i].frag_size =
                        (eff_mtu > buf_size + frag_sizes[i]) ?
                                frag_sizes[i] : eff_mtu - buf_size;
                priv->frag_info[i].frag_prefix_size = buf_size;
                priv->frag_info[i].frag_stride =
-                               ALIGN(priv->frag_info[i].frag_size,
-                                     SMP_CACHE_BYTES);
+                               ALIGN(priv->frag_info[i].frag_size, align);
+               priv->frag_info[i].dma_dir = dma_dir;
                buf_size += priv->frag_info[i].frag_size;
                i++;
        }
index 76aa4d2..9df87ca 100644
@@ -196,6 +196,7 @@ int mlx4_en_activate_tx_ring(struct mlx4_en_priv *priv,
        ring->last_nr_txbb = 1;
        memset(ring->tx_info, 0, ring->size * sizeof(struct mlx4_en_tx_info));
        memset(ring->buf, 0, ring->buf_size);
+       ring->free_tx_desc = mlx4_en_free_tx_desc;
 
        ring->qp_state = MLX4_QP_STATE_RST;
        ring->doorbell_qpn = cpu_to_be32(ring->qp.qpn << 8);
@@ -265,10 +266,10 @@ static void mlx4_en_stamp_wqe(struct mlx4_en_priv *priv,
 }
 
 
-static u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
-                               struct mlx4_en_tx_ring *ring,
-                               int index, u8 owner, u64 timestamp,
-                               int napi_mode)
+u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
+                        struct mlx4_en_tx_ring *ring,
+                        int index, u8 owner, u64 timestamp,
+                        int napi_mode)
 {
        struct mlx4_en_tx_info *tx_info = &ring->tx_info[index];
        struct mlx4_en_tx_desc *tx_desc = ring->buf + index * TXBB_SIZE;
@@ -344,6 +345,27 @@ static u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
        return tx_info->nr_txbb;
 }
 
+u32 mlx4_en_recycle_tx_desc(struct mlx4_en_priv *priv,
+                           struct mlx4_en_tx_ring *ring,
+                           int index, u8 owner, u64 timestamp,
+                           int napi_mode)
+{
+       struct mlx4_en_tx_info *tx_info = &ring->tx_info[index];
+       struct mlx4_en_rx_alloc frame = {
+               .page = tx_info->page,
+               .dma = tx_info->map0_dma,
+               .page_offset = 0,
+               .page_size = PAGE_SIZE,
+       };
+
+       if (!mlx4_en_rx_recycle(ring->recycle_ring, &frame)) {
+               dma_unmap_page(priv->ddev, tx_info->map0_dma,
+                              PAGE_SIZE, priv->frag_info[0].dma_dir);
+               put_page(tx_info->page);
+       }
+
+       return tx_info->nr_txbb;
+}
 
 int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring)
 {
@@ -362,7 +384,7 @@ int mlx4_en_free_tx_buf(struct net_device *dev, struct mlx4_en_tx_ring *ring)
        }
 
        while (ring->cons != ring->prod) {
-               ring->last_nr_txbb = mlx4_en_free_tx_desc(priv, ring,
+               ring->last_nr_txbb = ring->free_tx_desc(priv, ring,
                                                ring->cons & ring->size_mask,
                                                !!(ring->cons & ring->size), 0,
                                                0 /* Non-NAPI caller */);
@@ -444,7 +466,7 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
                                timestamp = mlx4_en_get_cqe_ts(cqe);
 
                        /* free next descriptor */
-                       last_nr_txbb = mlx4_en_free_tx_desc(
+                       last_nr_txbb = ring->free_tx_desc(
                                        priv, ring, ring_index,
                                        !!((ring_cons + txbbs_skipped) &
                                        ring->size), timestamp, napi_budget);
@@ -476,6 +498,9 @@ static bool mlx4_en_process_tx_cq(struct net_device *dev,
        ACCESS_ONCE(ring->last_nr_txbb) = last_nr_txbb;
        ACCESS_ONCE(ring->cons) = ring_cons + txbbs_skipped;
 
+       if (ring->free_tx_desc == mlx4_en_recycle_tx_desc)
+               return done < budget;
+
        netdev_tx_completed_queue(ring->tx_queue, packets, bytes);
 
        /* Wakeup Tx queue if this stopped, and ring is not full.
@@ -631,8 +656,7 @@ static int get_real_size(const struct sk_buff *skb,
 static void build_inline_wqe(struct mlx4_en_tx_desc *tx_desc,
                             const struct sk_buff *skb,
                             const struct skb_shared_info *shinfo,
-                            int real_size, u16 *vlan_tag,
-                            int tx_ind, void *fragptr)
+                            void *fragptr)
 {
        struct mlx4_wqe_inline_seg *inl = &tx_desc->inl;
        int spc = MLX4_INLINE_ALIGN - CTRL_SIZE - sizeof *inl;
@@ -700,10 +724,66 @@ static void mlx4_bf_copy(void __iomem *dst, const void *src,
        __iowrite64_copy(dst, src, bytecnt / 8);
 }
 
+void mlx4_en_xmit_doorbell(struct mlx4_en_tx_ring *ring)
+{
+       wmb();
+       /* Since there is no iowrite*_native() that writes the
+        * value as is, without byteswapping - using the one
+        * that doesn't do byteswapping in the relevant arch
+        * endianness.
+        */
+#if defined(__LITTLE_ENDIAN)
+       iowrite32(
+#else
+       iowrite32be(
+#endif
+                 ring->doorbell_qpn,
+                 ring->bf.uar->map + MLX4_SEND_DOORBELL);
+}
+
+static void mlx4_en_tx_write_desc(struct mlx4_en_tx_ring *ring,
+                                 struct mlx4_en_tx_desc *tx_desc,
+                                 union mlx4_wqe_qpn_vlan qpn_vlan,
+                                 int desc_size, int bf_index,
+                                 __be32 op_own, bool bf_ok,
+                                 bool send_doorbell)
+{
+       tx_desc->ctrl.qpn_vlan = qpn_vlan;
+
+       if (bf_ok) {
+               op_own |= htonl((bf_index & 0xffff) << 8);
+               /* Ensure new descriptor hits memory
+                * before setting ownership of this descriptor to HW
+                */
+               dma_wmb();
+               tx_desc->ctrl.owner_opcode = op_own;
+
+               wmb();
+
+               mlx4_bf_copy(ring->bf.reg + ring->bf.offset, &tx_desc->ctrl,
+                            desc_size);
+
+               wmb();
+
+               ring->bf.offset ^= ring->bf.buf_size;
+       } else {
+               /* Ensure new descriptor hits memory
+                * before setting ownership of this descriptor to HW
+                */
+               dma_wmb();
+               tx_desc->ctrl.owner_opcode = op_own;
+               if (send_doorbell)
+                       mlx4_en_xmit_doorbell(ring);
+               else
+                       ring->xmit_more++;
+       }
+}
+
 netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
 {
        struct skb_shared_info *shinfo = skb_shinfo(skb);
        struct mlx4_en_priv *priv = netdev_priv(dev);
+       union mlx4_wqe_qpn_vlan qpn_vlan = {};
        struct device *ddev = priv->ddev;
        struct mlx4_en_tx_ring *ring;
        struct mlx4_en_tx_desc *tx_desc;
@@ -715,7 +795,6 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
        int real_size;
        u32 index, bf_index;
        __be32 op_own;
-       u16 vlan_tag = 0;
        u16 vlan_proto = 0;
        int i_frag;
        int lso_header_size;
@@ -725,6 +804,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
        bool stop_queue;
        bool inline_ok;
        u32 ring_cons;
+       bool bf_ok;
 
        tx_ind = skb_get_queue_mapping(skb);
        ring = priv->tx_ring[tx_ind];
@@ -749,9 +829,17 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
                goto tx_drop;
        }
 
+       bf_ok = ring->bf_enabled;
        if (skb_vlan_tag_present(skb)) {
-               vlan_tag = skb_vlan_tag_get(skb);
+               qpn_vlan.vlan_tag = cpu_to_be16(skb_vlan_tag_get(skb));
                vlan_proto = be16_to_cpu(skb->vlan_proto);
+               if (vlan_proto == ETH_P_8021AD)
+                       qpn_vlan.ins_vlan = MLX4_WQE_CTRL_INS_SVLAN;
+               else if (vlan_proto == ETH_P_8021Q)
+                       qpn_vlan.ins_vlan = MLX4_WQE_CTRL_INS_CVLAN;
+               else
+                       qpn_vlan.ins_vlan = 0;
+               bf_ok = false;
        }
 
        netdev_txq_bql_enqueue_prefetchw(ring->tx_queue);
@@ -771,6 +859,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
        else {
                tx_desc = (struct mlx4_en_tx_desc *) ring->bounce_buf;
                bounce = true;
+               bf_ok = false;
        }
 
        /* Save skb in tx_info ring */
@@ -907,8 +996,7 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
        AVG_PERF_COUNTER(priv->pstats.tx_pktsz_avg, skb->len);
 
        if (tx_info->inl)
-               build_inline_wqe(tx_desc, skb, shinfo, real_size, &vlan_tag,
-                                tx_ind, fragptr);
+               build_inline_wqe(tx_desc, skb, shinfo, fragptr);
 
        if (skb->encapsulation) {
                union {
@@ -946,60 +1034,15 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
 
        real_size = (real_size / 16) & 0x3f;
 
-       if (ring->bf_enabled && desc_size <= MAX_BF && !bounce &&
-           !skb_vlan_tag_present(skb) && send_doorbell) {
-               tx_desc->ctrl.bf_qpn = ring->doorbell_qpn |
-                                      cpu_to_be32(real_size);
-
-               op_own |= htonl((bf_index & 0xffff) << 8);
-               /* Ensure new descriptor hits memory
-                * before setting ownership of this descriptor to HW
-                */
-               dma_wmb();
-               tx_desc->ctrl.owner_opcode = op_own;
-
-               wmb();
+       bf_ok &= desc_size <= MAX_BF && send_doorbell;
 
-               mlx4_bf_copy(ring->bf.reg + ring->bf.offset, &tx_desc->ctrl,
-                            desc_size);
-
-               wmb();
-
-               ring->bf.offset ^= ring->bf.buf_size;
-       } else {
-               tx_desc->ctrl.vlan_tag = cpu_to_be16(vlan_tag);
-               if (vlan_proto == ETH_P_8021AD)
-                       tx_desc->ctrl.ins_vlan = MLX4_WQE_CTRL_INS_SVLAN;
-               else if (vlan_proto == ETH_P_8021Q)
-                       tx_desc->ctrl.ins_vlan = MLX4_WQE_CTRL_INS_CVLAN;
-               else
-                       tx_desc->ctrl.ins_vlan = 0;
-
-               tx_desc->ctrl.fence_size = real_size;
+       if (bf_ok)
+               qpn_vlan.bf_qpn = ring->doorbell_qpn | cpu_to_be32(real_size);
+       else
+               qpn_vlan.fence_size = real_size;
 
-               /* Ensure new descriptor hits memory
-                * before setting ownership of this descriptor to HW
-                */
-               dma_wmb();
-               tx_desc->ctrl.owner_opcode = op_own;
-               if (send_doorbell) {
-                       wmb();
-                       /* Since there is no iowrite*_native() that writes the
-                        * value as is, without byteswapping - using the one
-                        * the doesn't do byteswapping in the relevant arch
-                        * endianness.
-                        */
-#if defined(__LITTLE_ENDIAN)
-                       iowrite32(
-#else
-                       iowrite32be(
-#endif
-                                 ring->doorbell_qpn,
-                                 ring->bf.uar->map + MLX4_SEND_DOORBELL);
-               } else {
-                       ring->xmit_more++;
-               }
-       }
+       mlx4_en_tx_write_desc(ring, tx_desc, qpn_vlan, desc_size, bf_index,
+                             op_own, bf_ok, send_doorbell);
 
        if (unlikely(stop_queue)) {
                /* If queue was emptied after the if (stop_queue) , and before
@@ -1034,3 +1077,106 @@ tx_drop:
        return NETDEV_TX_OK;
 }
 
+netdev_tx_t mlx4_en_xmit_frame(struct mlx4_en_rx_alloc *frame,
+                              struct net_device *dev, unsigned int length,
+                              int tx_ind, int *doorbell_pending)
+{
+       struct mlx4_en_priv *priv = netdev_priv(dev);
+       union mlx4_wqe_qpn_vlan qpn_vlan = {};
+       struct mlx4_en_tx_ring *ring;
+       struct mlx4_en_tx_desc *tx_desc;
+       struct mlx4_wqe_data_seg *data;
+       struct mlx4_en_tx_info *tx_info;
+       int index, bf_index;
+       bool send_doorbell;
+       int nr_txbb = 1;
+       bool stop_queue;
+       dma_addr_t dma;
+       int real_size;
+       __be32 op_own;
+       u32 ring_cons;
+       bool bf_ok;
+
+       BUILD_BUG_ON_MSG(ALIGN(CTRL_SIZE + DS_SIZE, TXBB_SIZE) != TXBB_SIZE,
+                        "mlx4_en_xmit_frame requires minimum size tx desc");
+
+       ring = priv->tx_ring[tx_ind];
+
+       if (!priv->port_up)
+               goto tx_drop;
+
+       if (mlx4_en_is_tx_ring_full(ring))
+               goto tx_drop;
+
+       /* fetch ring->cons far ahead before needing it to avoid stall */
+       ring_cons = READ_ONCE(ring->cons);
+
+       index = ring->prod & ring->size_mask;
+       tx_info = &ring->tx_info[index];
+
+       bf_ok = ring->bf_enabled;
+
+       /* Track current inflight packets for performance analysis */
+       AVG_PERF_COUNTER(priv->pstats.inflight_avg,
+                        (u32)(ring->prod - ring_cons - 1));
+
+       bf_index = ring->prod;
+       tx_desc = ring->buf + index * TXBB_SIZE;
+       data = &tx_desc->data;
+
+       dma = frame->dma;
+
+       tx_info->page = frame->page;
+       frame->page = NULL;
+       tx_info->map0_dma = dma;
+       tx_info->map0_byte_count = length;
+       tx_info->nr_txbb = nr_txbb;
+       tx_info->nr_bytes = max_t(unsigned int, length, ETH_ZLEN);
+       tx_info->data_offset = (void *)data - (void *)tx_desc;
+       tx_info->ts_requested = 0;
+       tx_info->nr_maps = 1;
+       tx_info->linear = 1;
+       tx_info->inl = 0;
+
+       dma_sync_single_for_device(priv->ddev, dma, length, PCI_DMA_TODEVICE);
+
+       data->addr = cpu_to_be64(dma);
+       data->lkey = ring->mr_key;
+       dma_wmb();
+       data->byte_count = cpu_to_be32(length);
+
+       /* tx completion can avoid cache line miss for common cases */
+       tx_desc->ctrl.srcrb_flags = priv->ctrl_flags;
+
+       op_own = cpu_to_be32(MLX4_OPCODE_SEND) |
+               ((ring->prod & ring->size) ?
+                cpu_to_be32(MLX4_EN_BIT_DESC_OWN) : 0);
+
+       ring->packets++;
+       ring->bytes += tx_info->nr_bytes;
+       AVG_PERF_COUNTER(priv->pstats.tx_pktsz_avg, length);
+
+       ring->prod += nr_txbb;
+
+       stop_queue = mlx4_en_is_tx_ring_full(ring);
+       send_doorbell = stop_queue ||
+                               *doorbell_pending > MLX4_EN_DOORBELL_BUDGET;
+       bf_ok &= send_doorbell;
+
+       real_size = ((CTRL_SIZE + nr_txbb * DS_SIZE) / 16) & 0x3f;
+
+       if (bf_ok)
+               qpn_vlan.bf_qpn = ring->doorbell_qpn | cpu_to_be32(real_size);
+       else
+               qpn_vlan.fence_size = real_size;
+
+       mlx4_en_tx_write_desc(ring, tx_desc, qpn_vlan, TXBB_SIZE, bf_index,
+                             op_own, bf_ok, send_doorbell);
+       *doorbell_pending = send_doorbell ? 0 : *doorbell_pending + 1;
+
+       return NETDEV_TX_OK;
+
+tx_drop:
+       ring->tx_dropped++;
+       return NETDEV_TX_BUSY;
+}
index d39bf59..2c2913d 100644 (file)
@@ -132,6 +132,7 @@ enum {
                                         MLX4_EN_NUM_UP)
 
 #define MLX4_EN_DEFAULT_TX_WORK                256
+#define MLX4_EN_DOORBELL_BUDGET                8
 
 /* Target number of packets to coalesce with interrupt moderation */
 #define MLX4_EN_RX_COAL_TARGET 44
@@ -164,6 +165,10 @@ enum {
 #define MLX4_LOOPBACK_TEST_PAYLOAD (HEADER_COPY_SIZE - ETH_HLEN)
 
 #define MLX4_EN_MIN_MTU                46
+/* VLAN_HLEN is added twice, to support skb vlan tagged with multiple
+ * headers. (For example: ETH_P_8021Q and ETH_P_8021AD).
+ */
+#define MLX4_EN_EFF_MTU(mtu)   ((mtu) + ETH_HLEN + (2 * VLAN_HLEN))
 #define ETH_BCAST              0xffffffffffffULL
 
 #define MLX4_EN_LOOPBACK_RETRIES       5
@@ -215,7 +220,10 @@ enum cq_type {
 
 
 struct mlx4_en_tx_info {
-       struct sk_buff *skb;
+       union {
+               struct sk_buff *skb;
+               struct page *page;
+       };
        dma_addr_t      map0_dma;
        u32             map0_byte_count;
        u32             nr_txbb;
@@ -255,6 +263,14 @@ struct mlx4_en_rx_alloc {
        u32             page_size;
 };
 
+#define MLX4_EN_CACHE_SIZE (2 * NAPI_POLL_WEIGHT)
+struct mlx4_en_page_cache {
+       u32 index;
+       struct mlx4_en_rx_alloc buf[MLX4_EN_CACHE_SIZE];
+};
+
+struct mlx4_en_priv;
+
 struct mlx4_en_tx_ring {
        /* cache line used and dirtied in tx completion
         * (mlx4_en_free_tx_buf())
@@ -288,6 +304,11 @@ struct mlx4_en_tx_ring {
        __be32                  mr_key;
        void                    *buf;
        struct mlx4_en_tx_info  *tx_info;
+       struct mlx4_en_rx_ring  *recycle_ring;
+       u32                     (*free_tx_desc)(struct mlx4_en_priv *priv,
+                                               struct mlx4_en_tx_ring *ring,
+                                               int index, u8 owner,
+                                               u64 timestamp, int napi_mode);
        u8                      *bounce_buf;
        struct mlx4_qp_context  context;
        int                     qpn;
@@ -319,6 +340,8 @@ struct mlx4_en_rx_ring {
        u8  fcs_del;
        void *buf;
        void *rx_info;
+       struct bpf_prog *xdp_prog;
+       struct mlx4_en_page_cache page_cache;
        unsigned long bytes;
        unsigned long packets;
        unsigned long csum_ok;
@@ -353,12 +376,14 @@ struct mlx4_en_port_profile {
        u32 rx_ring_num;
        u32 tx_ring_size;
        u32 rx_ring_size;
+       u8 num_tx_rings_p_up;
        u8 rx_pause;
        u8 rx_ppp;
        u8 tx_pause;
        u8 tx_ppp;
        int rss_rings;
        int inline_thold;
+       struct hwtstamp_config hwtstamp_config;
 };
 
 struct mlx4_en_profile {
@@ -438,7 +463,9 @@ struct mlx4_en_mc_list {
 struct mlx4_en_frag_info {
        u16 frag_size;
        u16 frag_prefix_size;
-       u16 frag_stride;
+       u32 frag_stride;
+       enum dma_data_direction dma_dir;
+       int order;
 };
 
 #ifdef CONFIG_MLX4_EN_DCB
@@ -558,6 +585,7 @@ struct mlx4_en_priv {
        struct mlx4_en_frag_info frag_info[MLX4_EN_MAX_RX_FRAGS];
        u16 num_frags;
        u16 log_rx_info;
+       int xdp_ring_num;
 
        struct mlx4_en_tx_ring **tx_ring;
        struct mlx4_en_rx_ring *rx_ring[MAX_RX_RINGS];
@@ -647,8 +675,11 @@ void mlx4_en_set_stats_bitmap(struct mlx4_dev *dev,
                              u8 rx_ppp, u8 rx_pause,
                              u8 tx_ppp, u8 tx_pause);
 
-void mlx4_en_free_resources(struct mlx4_en_priv *priv);
-int mlx4_en_alloc_resources(struct mlx4_en_priv *priv);
+int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv,
+                               struct mlx4_en_priv *tmp,
+                               struct mlx4_en_port_profile *prof);
+void mlx4_en_safe_replace_resources(struct mlx4_en_priv *priv,
+                                   struct mlx4_en_priv *tmp);
 
 int mlx4_en_create_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq,
                      int entries, int ring, enum cq_type mode, int node);
@@ -663,6 +694,12 @@ void mlx4_en_tx_irq(struct mlx4_cq *mcq);
 u16 mlx4_en_select_queue(struct net_device *dev, struct sk_buff *skb,
                         void *accel_priv, select_queue_fallback_t fallback);
 netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev);
+netdev_tx_t mlx4_en_xmit_frame(struct mlx4_en_rx_alloc *frame,
+                              struct net_device *dev, unsigned int length,
+                              int tx_ind, int *doorbell_pending);
+void mlx4_en_xmit_doorbell(struct mlx4_en_tx_ring *ring);
+bool mlx4_en_rx_recycle(struct mlx4_en_rx_ring *ring,
+                       struct mlx4_en_rx_alloc *frame);
 
 int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
                           struct mlx4_en_tx_ring **pring,
@@ -691,6 +728,14 @@ int mlx4_en_process_rx_cq(struct net_device *dev,
                          int budget);
 int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget);
 int mlx4_en_poll_tx_cq(struct napi_struct *napi, int budget);
+u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
+                        struct mlx4_en_tx_ring *ring,
+                        int index, u8 owner, u64 timestamp,
+                        int napi_mode);
+u32 mlx4_en_recycle_tx_desc(struct mlx4_en_priv *priv,
+                           struct mlx4_en_tx_ring *ring,
+                           int index, u8 owner, u64 timestamp,
+                           int napi_mode);
 void mlx4_en_fill_qp_context(struct mlx4_en_priv *priv, int size, int stride,
                int is_tx, int rss, int qpn, int cqn, int user_prio,
                struct mlx4_qp_context *context);
index 611ab55..bdcb699 100644 (file)
@@ -1464,6 +1464,11 @@ static int mlx5e_open_channels(struct mlx5e_priv *priv)
                        goto err_close_channels;
        }
 
+       /* FIXME: This is a W/A for tx timeout watchdog false alarm when
+        * polling for inactive tx queues.
+        */
+       netif_tx_start_all_queues(priv->netdev);
+
        kfree(cparam);
        return 0;
 
@@ -1483,6 +1488,12 @@ static void mlx5e_close_channels(struct mlx5e_priv *priv)
 {
        int i;
 
+       /* FIXME: This is a W/A only for tx timeout watchdog false alarm when
+        * polling for inactive tx queues.
+        */
+       netif_tx_stop_all_queues(priv->netdev);
+       netif_tx_disable(priv->netdev);
+
        for (i = 0; i < priv->params.num_channels; i++)
                mlx5e_close_channel(priv->channel[i]);
 
@@ -2774,7 +2785,7 @@ static void mlx5e_tx_timeout(struct net_device *dev)
        for (i = 0; i < priv->params.num_channels * priv->params.num_tc; i++) {
                struct mlx5e_sq *sq = priv->txq_to_sq_map[i];
 
-               if (!netif_tx_queue_stopped(netdev_get_tx_queue(dev, i)))
+               if (!netif_xmit_stopped(netdev_get_tx_queue(dev, i)))
                        continue;
                sched_work = true;
                set_bit(MLX5E_SQ_STATE_TX_TIMEOUT, &sq->state);
index 5ef02f0..1c7d8b8 100644 (file)
 #include <generated/utsrelease.h>
 #include <linux/mlx5/fs.h>
 #include <net/switchdev.h>
+#include <net/pkt_cls.h>
 
 #include "eswitch.h"
 #include "en.h"
+#include "en_tc.h"
 
 static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
 
@@ -201,6 +203,10 @@ void mlx5e_nic_rep_unload(struct mlx5_eswitch *esw,
 
        if (test_bit(MLX5E_STATE_OPENED, &priv->state))
                mlx5e_remove_sqs_fwd_rules(priv);
+
+       /* clean (and re-init) existing uplink offloaded TC rules */
+       mlx5e_tc_cleanup(priv);
+       mlx5e_tc_init(priv);
 }
 
 static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
@@ -217,6 +223,29 @@ static int mlx5e_rep_get_phys_port_name(struct net_device *dev,
        return 0;
 }
 
+static int mlx5e_rep_ndo_setup_tc(struct net_device *dev, u32 handle,
+                                 __be16 proto, struct tc_to_netdev *tc)
+{
+       struct mlx5e_priv *priv = netdev_priv(dev);
+
+       if (TC_H_MAJ(handle) != TC_H_MAJ(TC_H_INGRESS))
+               return -EOPNOTSUPP;
+
+       switch (tc->type) {
+       case TC_SETUP_CLSFLOWER:
+               switch (tc->cls_flower->command) {
+               case TC_CLSFLOWER_REPLACE:
+                       return mlx5e_configure_flower(priv, proto, tc->cls_flower);
+               case TC_CLSFLOWER_DESTROY:
+                       return mlx5e_delete_flower(priv, tc->cls_flower);
+               case TC_CLSFLOWER_STATS:
+                       return mlx5e_stats_flower(priv, tc->cls_flower);
+               }
+       default:
+               return -EOPNOTSUPP;
+       }
+}
+
 static const struct switchdev_ops mlx5e_rep_switchdev_ops = {
        .switchdev_port_attr_get        = mlx5e_attr_get,
 };
@@ -226,6 +255,7 @@ static const struct net_device_ops mlx5e_netdev_ops_rep = {
        .ndo_stop                = mlx5e_close,
        .ndo_start_xmit          = mlx5e_xmit,
        .ndo_get_phys_port_name  = mlx5e_rep_get_phys_port_name,
+       .ndo_setup_tc            = mlx5e_rep_ndo_setup_tc,
        .ndo_get_stats64         = mlx5e_get_stats,
 };
 
@@ -279,7 +309,8 @@ static void mlx5e_build_rep_netdev(struct net_device *netdev)
        netdev->switchdev_ops = &mlx5e_rep_switchdev_ops;
 #endif
 
-       netdev->features         |= NETIF_F_VLAN_CHALLENGED;
+       netdev->features         |= NETIF_F_VLAN_CHALLENGED | NETIF_F_HW_TC;
+       netdev->hw_features      |= NETIF_F_HW_TC;
 
        eth_hw_addr_random(netdev);
 }
@@ -323,8 +354,14 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
        }
        rep->vport_rx_rule = flow_rule;
 
+       err = mlx5e_tc_init(priv);
+       if (err)
+               goto err_del_flow_rule;
+
        return 0;
 
+err_del_flow_rule:
+       mlx5_del_flow_rule(rep->vport_rx_rule);
 err_destroy_direct_tirs:
        mlx5e_destroy_direct_tirs(priv);
 err_destroy_direct_rqts:
@@ -338,6 +375,7 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
        struct mlx5_eswitch_rep *rep = priv->ppriv;
        int i;
 
+       mlx5e_tc_cleanup(priv);
        mlx5_del_flow_rule(rep->vport_rx_rule);
        mlx5e_destroy_direct_tirs(priv);
        for (i = 0; i < priv->params.num_channels; i++)
index 3261e8b..0f19b01 100644 (file)
 #include <linux/mlx5/fs.h>
 #include <linux/mlx5/device.h>
 #include <linux/rhashtable.h>
+#include <net/switchdev.h>
+#include <net/tc_act/tc_mirred.h>
 #include "en.h"
 #include "en_tc.h"
+#include "eswitch.h"
 
 struct mlx5e_tc_flow {
        struct rhash_head       node;
@@ -49,9 +52,9 @@ struct mlx5e_tc_flow {
 #define MLX5E_TC_TABLE_NUM_ENTRIES 1024
 #define MLX5E_TC_TABLE_NUM_GROUPS 4
 
-static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
-                                               struct mlx5_flow_spec *spec,
-                                               u32 action, u32 flow_tag)
+static struct mlx5_flow_rule *mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
+                                                   struct mlx5_flow_spec *spec,
+                                                   u32 action, u32 flow_tag)
 {
        struct mlx5_core_dev *dev = priv->mdev;
        struct mlx5_flow_destination dest = { 0 };
@@ -62,7 +65,7 @@ static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
        if (action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
                dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
                dest.ft = priv->fs.vlan.ft.t;
-       } else {
+       } else if (action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
                counter = mlx5_fc_create(dev, true);
                if (IS_ERR(counter))
                        return ERR_CAST(counter);
@@ -109,6 +112,22 @@ err_create_ft:
        return rule;
 }
 
+static struct mlx5_flow_rule *mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+                                                   struct mlx5_flow_spec *spec,
+                                                   u32 action, u32 dst_vport)
+{
+       struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+       struct mlx5_eswitch_rep *rep = priv->ppriv;
+       u32 src_vport;
+
+       if (rep->vport) /* set source vport for the flow */
+               src_vport = rep->vport;
+       else
+               src_vport = FDB_UPLINK_VPORT;
+
+       return mlx5_eswitch_add_offloaded_rule(esw, spec, action, src_vport, dst_vport);
+}
+
 static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
                              struct mlx5_flow_rule *rule)
 {
@@ -120,7 +139,7 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
 
        mlx5_fc_destroy(priv->mdev, counter);
 
-       if (!mlx5e_tc_num_filters(priv)) {
+       if (!mlx5e_tc_num_filters(priv) && (priv->fs.tc.t)) {
                mlx5_destroy_flow_table(priv->fs.tc.t);
                priv->fs.tc.t = NULL;
        }
@@ -295,8 +314,8 @@ static int parse_cls_flower(struct mlx5e_priv *priv, struct mlx5_flow_spec *spec
        return 0;
 }
 
-static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
-                           u32 *action, u32 *flow_tag)
+static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+                               u32 *action, u32 *flow_tag)
 {
        const struct tc_action *a;
 
@@ -339,16 +358,66 @@ static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
        return 0;
 }
 
+static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+                               u32 *action, u32 *dest_vport)
+{
+       const struct tc_action *a;
+
+       if (tc_no_actions(exts))
+               return -EINVAL;
+
+       *action = 0;
+
+       tc_for_each_action(a, exts) {
+               /* Only support a single action per rule */
+               if (*action)
+                       return -EINVAL;
+
+               if (is_tcf_gact_shot(a)) {
+                       *action = MLX5_FLOW_CONTEXT_ACTION_DROP |
+                                 MLX5_FLOW_CONTEXT_ACTION_COUNT;
+                       continue;
+               }
+
+               if (is_tcf_mirred_redirect(a)) {
+                       int ifindex = tcf_mirred_ifindex(a);
+                       struct net_device *out_dev;
+                       struct mlx5e_priv *out_priv;
+                       struct mlx5_eswitch_rep *out_rep;
+
+                       out_dev = __dev_get_by_index(dev_net(priv->netdev), ifindex);
+
+                       if (!switchdev_port_same_parent_id(priv->netdev, out_dev)) {
+                               pr_err("devices %s %s not on same switch HW, can't offload forwarding\n",
+                                      priv->netdev->name, out_dev->name);
+                               return -EINVAL;
+                       }
+
+                       out_priv = netdev_priv(out_dev);
+                       out_rep  = out_priv->ppriv;
+                       if (out_rep->vport == 0)
+                               *dest_vport = FDB_UPLINK_VPORT;
+                       else
+                               *dest_vport = out_rep->vport;
+                       *action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+                       continue;
+               }
+
+               return -EINVAL;
+       }
+       return 0;
+}
+
 int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
                           struct tc_cls_flower_offload *f)
 {
        struct mlx5e_tc_table *tc = &priv->fs.tc;
        int err = 0;
-       u32 flow_tag;
-       u32 action;
+       u32 flow_tag, action, dest_vport = 0;
        struct mlx5e_tc_flow *flow;
        struct mlx5_flow_spec *spec;
        struct mlx5_flow_rule *old = NULL;
+       struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
 
        flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
                                      tc->ht_params);
@@ -369,28 +438,35 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
        if (err < 0)
                goto err_free;
 
-       err = parse_tc_actions(priv, f->exts, &action, &flow_tag);
-       if (err < 0)
+       if (esw && esw->mode == SRIOV_OFFLOADS) {
+               err = parse_tc_fdb_actions(priv, f->exts, &action, &dest_vport);
+               if (err < 0)
+                       goto err_free;
+               flow->rule = mlx5e_tc_add_fdb_flow(priv, spec, action, dest_vport);
+       } else {
+               err = parse_tc_nic_actions(priv, f->exts, &action, &flow_tag);
+               if (err < 0)
+                       goto err_free;
+               flow->rule = mlx5e_tc_add_nic_flow(priv, spec, action, flow_tag);
+       }
+
+       if (IS_ERR(flow->rule)) {
+               err = PTR_ERR(flow->rule);
                goto err_free;
+       }
 
        err = rhashtable_insert_fast(&tc->ht, &flow->node,
                                     tc->ht_params);
        if (err)
-               goto err_free;
-
-       flow->rule = mlx5e_tc_add_flow(priv, spec, action, flow_tag);
-       if (IS_ERR(flow->rule)) {
-               err = PTR_ERR(flow->rule);
-               goto err_hash_del;
-       }
+               goto err_del_rule;
 
        if (old)
                mlx5e_tc_del_flow(priv, old);
 
        goto out;
 
-err_hash_del:
-       rhashtable_remove_fast(&tc->ht, &flow->node, tc->ht_params);
+err_del_rule:
+       mlx5_del_flow_rule(flow->rule);
 
 err_free:
        if (!old)
index 7b45e6a..c0b0560 100644 (file)
@@ -145,6 +145,7 @@ struct mlx5_eswitch_fdb {
                } legacy;
 
                struct offloads_fdb {
+                       struct mlx5_flow_table *fdb;
                        struct mlx5_flow_group *send_to_vport_grp;
                        struct mlx5_flow_group *miss_grp;
                        struct mlx5_flow_rule  *miss_rule;
@@ -221,6 +222,12 @@ int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
                                 int vport,
                                 struct ifla_vf_stats *vf_stats);
 
+struct mlx5_flow_spec;
+
+struct mlx5_flow_rule *
+mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+                               struct mlx5_flow_spec *spec,
+                               u32 action, u32 src_vport, u32 dst_vport);
 struct mlx5_flow_rule *
 mlx5_eswitch_create_vport_rx_rule(struct mlx5_eswitch *esw, int vport, u32 tirn);
 
index 1842dfb..a357e8e 100644 (file)
 #include "mlx5_core.h"
 #include "eswitch.h"
 
+enum {
+       FDB_FAST_PATH = 0,
+       FDB_SLOW_PATH
+};
+
+struct mlx5_flow_rule *
+mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+                               struct mlx5_flow_spec *spec,
+                               u32 action, u32 src_vport, u32 dst_vport)
+{
+       struct mlx5_flow_destination dest = { 0 };
+       struct mlx5_fc *counter = NULL;
+       struct mlx5_flow_rule *rule;
+       void *misc;
+
+       if (esw->mode != SRIOV_OFFLOADS)
+               return ERR_PTR(-EOPNOTSUPP);
+
+       if (action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
+               dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
+               dest.vport_num = dst_vport;
+               action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+       } else if (action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
+               counter = mlx5_fc_create(esw->dev, true);
+               if (IS_ERR(counter))
+                       return ERR_CAST(counter);
+               dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+               dest.counter = counter;
+       }
+
+       misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, misc_parameters);
+       MLX5_SET(fte_match_set_misc, misc, source_port, src_vport);
+
+       misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria, misc_parameters);
+       MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);
+
+       spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS |
+                                     MLX5_MATCH_MISC_PARAMETERS;
+
+       rule = mlx5_add_flow_rule((struct mlx5_flow_table *)esw->fdb_table.fdb,
+                                 spec, action, 0, &dest);
+
+       if (IS_ERR(rule))
+               mlx5_fc_destroy(esw->dev, counter);
+
+       return rule;
+}
+
 static struct mlx5_flow_rule *
 mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *esw, int vport, u32 sqn)
 {
@@ -149,7 +197,7 @@ static int esw_add_fdb_miss_rule(struct mlx5_eswitch *esw)
        dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
        dest.vport_num = 0;
 
-       flow_rule = mlx5_add_flow_rule(esw->fdb_table.fdb, spec,
+       flow_rule = mlx5_add_flow_rule(esw->fdb_table.offloads.fdb, spec,
                                       MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
                                       0, &dest);
        if (IS_ERR(flow_rule)) {
@@ -165,6 +213,8 @@ out:
 }
 
 #define MAX_PF_SQ 256
+#define ESW_OFFLOADS_NUM_ENTRIES (1 << 13) /* 8K */
+#define ESW_OFFLOADS_NUM_GROUPS  4
 
 static int esw_create_offloads_fdb_table(struct mlx5_eswitch *esw, int nvports)
 {
@@ -190,15 +240,25 @@ static int esw_create_offloads_fdb_table(struct mlx5_eswitch *esw, int nvports)
        esw_debug(dev, "Create offloads FDB table, log_max_size(%d)\n",
                  MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size));
 
-       table_size = nvports + MAX_PF_SQ + 1;
-       fdb = mlx5_create_flow_table(root_ns, 0, table_size, 0);
+       fdb = mlx5_create_auto_grouped_flow_table(root_ns, FDB_FAST_PATH,
+                                                 ESW_OFFLOADS_NUM_ENTRIES,
+                                                 ESW_OFFLOADS_NUM_GROUPS, 0);
        if (IS_ERR(fdb)) {
                err = PTR_ERR(fdb);
-               esw_warn(dev, "Failed to create FDB Table err %d\n", err);
-               goto fdb_err;
+               esw_warn(dev, "Failed to create Fast path FDB Table err %d\n", err);
+               goto fast_fdb_err;
        }
        esw->fdb_table.fdb = fdb;
 
+       table_size = nvports + MAX_PF_SQ + 1;
+       fdb = mlx5_create_flow_table(root_ns, FDB_SLOW_PATH, table_size, 0);
+       if (IS_ERR(fdb)) {
+               err = PTR_ERR(fdb);
+               esw_warn(dev, "Failed to create slow path FDB Table err %d\n", err);
+               goto slow_fdb_err;
+       }
+       esw->fdb_table.offloads.fdb = fdb;
+
        /* create send-to-vport group */
        memset(flow_group_in, 0, inlen);
        MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
@@ -247,8 +307,10 @@ miss_rule_err:
 miss_err:
        mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
 send_vport_err:
-       mlx5_destroy_flow_table(fdb);
-fdb_err:
+       mlx5_destroy_flow_table(esw->fdb_table.offloads.fdb);
+slow_fdb_err:
+       mlx5_destroy_flow_table(esw->fdb_table.fdb);
+fast_fdb_err:
 ns_err:
        kvfree(flow_group_in);
        return err;
@@ -264,6 +326,7 @@ static void esw_destroy_offloads_fdb_table(struct mlx5_eswitch *esw)
        mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
        mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
 
+       mlx5_destroy_flow_table(esw->fdb_table.offloads.fdb);
        mlx5_destroy_flow_table(esw->fdb_table.fdb);
 }
 
index a5bb6b6..9134010 100644 (file)
@@ -413,3 +413,70 @@ int mlx5_cmd_fc_query(struct mlx5_core_dev *dev, u16 id,
 
        return 0;
 }
+
+struct mlx5_cmd_fc_bulk {
+       u16 id;
+       int num;
+       int outlen;
+       u32 out[0];
+};
+
+struct mlx5_cmd_fc_bulk *
+mlx5_cmd_fc_bulk_alloc(struct mlx5_core_dev *dev, u16 id, int num)
+{
+       struct mlx5_cmd_fc_bulk *b;
+       int outlen = sizeof(*b) +
+               MLX5_ST_SZ_BYTES(query_flow_counter_out) +
+               MLX5_ST_SZ_BYTES(traffic_counter) * num;
+
+       b = kzalloc(outlen, GFP_KERNEL);
+       if (!b)
+               return NULL;
+
+       b->id = id;
+       b->num = num;
+       b->outlen = outlen;
+
+       return b;
+}
+
+void mlx5_cmd_fc_bulk_free(struct mlx5_cmd_fc_bulk *b)
+{
+       kfree(b);
+}
+
+int
+mlx5_cmd_fc_bulk_query(struct mlx5_core_dev *dev, struct mlx5_cmd_fc_bulk *b)
+{
+       u32 in[MLX5_ST_SZ_DW(query_flow_counter_in)];
+
+       memset(in, 0, sizeof(in));
+
+       MLX5_SET(query_flow_counter_in, in, opcode,
+                MLX5_CMD_OP_QUERY_FLOW_COUNTER);
+       MLX5_SET(query_flow_counter_in, in, op_mod, 0);
+       MLX5_SET(query_flow_counter_in, in, flow_counter_id, b->id);
+       MLX5_SET(query_flow_counter_in, in, num_of_counters, b->num);
+
+       return mlx5_cmd_exec_check_status(dev, in, sizeof(in),
+                                         b->out, b->outlen);
+}
+
+void mlx5_cmd_fc_bulk_get(struct mlx5_core_dev *dev,
+                         struct mlx5_cmd_fc_bulk *b, u16 id,
+                         u64 *packets, u64 *bytes)
+{
+       int index = id - b->id;
+       void *stats;
+
+       if (index < 0 || index >= b->num) {
+               mlx5_core_warn(dev, "Flow counter id (0x%x) out of range (0x%x..0x%x). Counter ignored.\n",
+                              id, b->id, b->id + b->num - 1);
+               return;
+       }
+
+       stats = MLX5_ADDR_OF(query_flow_counter_out, b->out,
+                            flow_statistics[index]);
+       *packets = MLX5_GET64(traffic_counter, stats, packets);
+       *bytes = MLX5_GET64(traffic_counter, stats, octets);
+}
index fc4f7b8..158844c 100644 (file)
@@ -76,4 +76,16 @@ int mlx5_cmd_fc_alloc(struct mlx5_core_dev *dev, u16 *id);
 int mlx5_cmd_fc_free(struct mlx5_core_dev *dev, u16 id);
 int mlx5_cmd_fc_query(struct mlx5_core_dev *dev, u16 id,
                      u64 *packets, u64 *bytes);
+
+struct mlx5_cmd_fc_bulk;
+
+struct mlx5_cmd_fc_bulk *
+mlx5_cmd_fc_bulk_alloc(struct mlx5_core_dev *dev, u16 id, int num);
+void mlx5_cmd_fc_bulk_free(struct mlx5_cmd_fc_bulk *b);
+int
+mlx5_cmd_fc_bulk_query(struct mlx5_core_dev *dev, struct mlx5_cmd_fc_bulk *b);
+void mlx5_cmd_fc_bulk_get(struct mlx5_core_dev *dev,
+                         struct mlx5_cmd_fc_bulk *b, u16 id,
+                         u64 *packets, u64 *bytes);
+
 #endif
index b0a1304..75bb8c8 100644 (file)
@@ -1712,15 +1712,21 @@ static int init_fdb_root_ns(struct mlx5_flow_steering *steering)
        if (!steering->fdb_root_ns)
                return -ENOMEM;
 
-       /* Create single prio */
        prio = fs_create_prio(&steering->fdb_root_ns->ns, 0, 1);
-       if (IS_ERR(prio)) {
-               cleanup_root_ns(steering->fdb_root_ns);
-               steering->fdb_root_ns = NULL;
-               return PTR_ERR(prio);
-       } else {
-               return 0;
-       }
+       if (IS_ERR(prio))
+               goto out_err;
+
+       prio = fs_create_prio(&steering->fdb_root_ns->ns, 1, 1);
+       if (IS_ERR(prio))
+               goto out_err;
+
+       set_prio_attrs(steering->fdb_root_ns);
+       return 0;
+
+out_err:
+       cleanup_root_ns(steering->fdb_root_ns);
+       steering->fdb_root_ns = NULL;
+       return PTR_ERR(prio);
 }
 
 static int init_ingress_acl_root_ns(struct mlx5_flow_steering *steering)
@@ -1734,10 +1740,7 @@ static int init_ingress_acl_root_ns(struct mlx5_flow_steering *steering)
        /* create 1 prio*/
        prio = fs_create_prio(&steering->esw_egress_root_ns->ns, 0,
                              MLX5_TOTAL_VPORTS(steering->dev));
-       if (IS_ERR(prio))
-               return PTR_ERR(prio);
-       else
-               return 0;
+       return PTR_ERR_OR_ZERO(prio);
 }
 
 static int init_egress_acl_root_ns(struct mlx5_flow_steering *steering)
@@ -1751,10 +1754,7 @@ static int init_egress_acl_root_ns(struct mlx5_flow_steering *steering)
        /* create 1 prio*/
        prio = fs_create_prio(&steering->esw_ingress_root_ns->ns, 0,
                              MLX5_TOTAL_VPORTS(steering->dev));
-       if (IS_ERR(prio))
-               return PTR_ERR(prio);
-       else
-               return 0;
+       return PTR_ERR_OR_ZERO(prio);
}
 
 int mlx5_init_fs(struct mlx5_core_dev *dev)
index d7ba91a..9cffb6a 100644 (file)
@@ -111,6 +111,7 @@ struct mlx5_fc_cache {
 };
 
 struct mlx5_fc {
+       struct rb_node node;
        struct list_head list;
 
        /* last{packets,bytes} members are used when calculating the delta since
index 164dc37..c2877e9 100644 (file)
@@ -32,6 +32,7 @@
 
 #include <linux/mlx5/driver.h>
 #include <linux/mlx5/fs.h>
+#include <linux/rbtree.h>
 #include "mlx5_core.h"
 #include "fs_core.h"
 #include "fs_cmd.h"
  *   elapsed, the thread will actually query the hardware.
  */
 
+static void mlx5_fc_stats_insert(struct rb_root *root, struct mlx5_fc *counter)
+{
+       struct rb_node **new = &root->rb_node;
+       struct rb_node *parent = NULL;
+
+       while (*new) {
+               struct mlx5_fc *this = container_of(*new, struct mlx5_fc, node);
+               int result = counter->id - this->id;
+
+               parent = *new;
+               if (result < 0)
+                       new = &((*new)->rb_left);
+               else
+                       new = &((*new)->rb_right);
+       }
+
+       /* Add new node and rebalance tree. */
+       rb_link_node(&counter->node, parent, new);
+       rb_insert_color(&counter->node, root);
+}
+
+static struct rb_node *mlx5_fc_stats_query(struct mlx5_core_dev *dev,
+                                          struct mlx5_fc *first,
+                                          u16 last_id)
+{
+       struct mlx5_cmd_fc_bulk *b;
+       struct rb_node *node = NULL;
+       u16 afirst_id;
+       int num;
+       int err;
+       int max_bulk = 1 << MLX5_CAP_GEN(dev, log_max_flow_counter_bulk);
+
+       /* first id must be aligned to 4 when using bulk query */
+       afirst_id = first->id & ~0x3;
+
+       /* number of counters to query, including the last counter */
+       num = ALIGN(last_id - afirst_id + 1, 4);
+       if (num > max_bulk) {
+               num = max_bulk;
+               last_id = afirst_id + num - 1;
+       }
+
+       b = mlx5_cmd_fc_bulk_alloc(dev, afirst_id, num);
+       if (!b) {
+               mlx5_core_err(dev, "Error allocating resources for bulk query\n");
+               return NULL;
+       }
+
+       err = mlx5_cmd_fc_bulk_query(dev, b);
+       if (err) {
+               mlx5_core_err(dev, "Error doing bulk query: %d\n", err);
+               goto out;
+       }
+
+       for (node = &first->node; node; node = rb_next(node)) {
+               struct mlx5_fc *counter = rb_entry(node, struct mlx5_fc, node);
+               struct mlx5_fc_cache *c = &counter->cache;
+
+               if (counter->id > last_id)
+                       break;
+
+               mlx5_cmd_fc_bulk_get(dev, b,
+                                    counter->id, &c->packets, &c->bytes);
+       }
+
+out:
+       mlx5_cmd_fc_bulk_free(b);
+
+       return node;
+}
+
 static void mlx5_fc_stats_work(struct work_struct *work)
 {
        struct mlx5_core_dev *dev = container_of(work, struct mlx5_core_dev,
                                                 priv.fc_stats.work.work);
        struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
        unsigned long now = jiffies;
-       struct mlx5_fc *counter;
-       struct mlx5_fc *tmp;
-       int err = 0;
+       struct mlx5_fc *counter = NULL;
+       struct mlx5_fc *last = NULL;
+       struct rb_node *node;
+       LIST_HEAD(tmplist);
 
        spin_lock(&fc_stats->addlist_lock);
 
-       list_splice_tail_init(&fc_stats->addlist, &fc_stats->list);
+       list_splice_tail_init(&fc_stats->addlist, &tmplist);
 
-       if (!list_empty(&fc_stats->list))
+       if (!list_empty(&tmplist) || !RB_EMPTY_ROOT(&fc_stats->counters))
                queue_delayed_work(fc_stats->wq, &fc_stats->work, MLX5_FC_STATS_PERIOD);
 
        spin_unlock(&fc_stats->addlist_lock);
 
-       list_for_each_entry_safe(counter, tmp, &fc_stats->list, list) {
-               struct mlx5_fc_cache *c = &counter->cache;
-               u64 packets;
-               u64 bytes;
+       list_for_each_entry(counter, &tmplist, list)
+               mlx5_fc_stats_insert(&fc_stats->counters, counter);
+
+       node = rb_first(&fc_stats->counters);
+       while (node) {
+               counter = rb_entry(node, struct mlx5_fc, node);
+
+               node = rb_next(node);
 
                if (counter->deleted) {
-                       list_del(&counter->list);
+                       rb_erase(&counter->node, &fc_stats->counters);
 
                        mlx5_cmd_fc_free(dev, counter->id);
 
@@ -101,26 +178,20 @@ static void mlx5_fc_stats_work(struct work_struct *work)
                        continue;
                }
 
-               if (time_before(now, fc_stats->next_query))
-                       continue;
+               last = counter;
+       }
 
-               err = mlx5_cmd_fc_query(dev, counter->id, &packets, &bytes);
-               if (err) {
-                       pr_err("Error querying stats for counter id %d\n",
-                              counter->id);
-                       continue;
-               }
+       if (time_before(now, fc_stats->next_query) || !last)
+               return;
 
-               if (packets == c->packets)
-                       continue;
+       node = rb_first(&fc_stats->counters);
+       while (node) {
+               counter = rb_entry(node, struct mlx5_fc, node);
 
-               c->lastuse = jiffies;
-               c->packets = packets;
-               c->bytes   = bytes;
+               node = mlx5_fc_stats_query(dev, counter, last->id);
        }
 
-       if (time_after_eq(now, fc_stats->next_query))
-               fc_stats->next_query = now + MLX5_FC_STATS_PERIOD;
+       fc_stats->next_query = now + MLX5_FC_STATS_PERIOD;
 }
 
 struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging)
@@ -176,7 +247,7 @@ int mlx5_init_fc_stats(struct mlx5_core_dev *dev)
 {
        struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
 
-       INIT_LIST_HEAD(&fc_stats->list);
+       fc_stats->counters = RB_ROOT;
        INIT_LIST_HEAD(&fc_stats->addlist);
        spin_lock_init(&fc_stats->addlist_lock);
 
@@ -194,20 +265,32 @@ void mlx5_cleanup_fc_stats(struct mlx5_core_dev *dev)
        struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
        struct mlx5_fc *counter;
        struct mlx5_fc *tmp;
+       struct rb_node *node;
 
        cancel_delayed_work_sync(&dev->priv.fc_stats.work);
        destroy_workqueue(dev->priv.fc_stats.wq);
        dev->priv.fc_stats.wq = NULL;
 
-       list_splice_tail_init(&fc_stats->addlist, &fc_stats->list);
-
-       list_for_each_entry_safe(counter, tmp, &fc_stats->list, list) {
+       list_for_each_entry_safe(counter, tmp, &fc_stats->addlist, list) {
                list_del(&counter->list);
 
                mlx5_cmd_fc_free(dev, counter->id);
 
                kfree(counter);
        }
+
+       node = rb_first(&fc_stats->counters);
+       while (node) {
+               counter = rb_entry(node, struct mlx5_fc, node);
+
+               node = rb_next(node);
+
+               rb_erase(&counter->node, &fc_stats->counters);
+
+               mlx5_cmd_fc_free(dev, counter->id);
+
+               kfree(counter);
+       }
 }
 
 void mlx5_fc_query_cached(struct mlx5_fc *counter,
index 05de772..e25a73e 100644 (file)
@@ -72,8 +72,8 @@ static int mlx5e_vxlan_core_del_port_cmd(struct mlx5_core_dev *mdev, u16 port)
        u32 in[MLX5_ST_SZ_DW(delete_vxlan_udp_dport_in)];
        u32 out[MLX5_ST_SZ_DW(delete_vxlan_udp_dport_out)];
 
-       memset(&in, 0, sizeof(in));
-       memset(&out, 0, sizeof(out));
+       memset(in, 0, sizeof(in));
+       memset(out, 0, sizeof(out));
 
        MLX5_SET(delete_vxlan_udp_dport_in, in, opcode,
                 MLX5_CMD_OP_DELETE_VXLAN_UDP_DPORT);
index 01ae548..480a3ba 100644 (file)
@@ -58,6 +58,7 @@
 #include <linux/workqueue.h>
 #include <asm/byteorder.h>
 #include <net/devlink.h>
+#include <trace/events/devlink.h>
 
 #include "core.h"
 #include "item.h"
@@ -447,6 +448,10 @@ static int mlxsw_emad_transmit(struct mlxsw_core *mlxsw_core,
        if (!skb)
                return -ENOMEM;
 
+       trace_devlink_hwmsg(priv_to_devlink(mlxsw_core), false, 0,
+                           skb->data + mlxsw_core->driver->txhdr_len,
+                           skb->len - mlxsw_core->driver->txhdr_len);
+
        atomic_set(&trans->active, 1);
        err = mlxsw_core_skb_transmit(mlxsw_core, skb, &trans->tx_info);
        if (err) {
@@ -529,6 +534,9 @@ static void mlxsw_emad_rx_listener_func(struct sk_buff *skb, u8 local_port,
        struct mlxsw_core *mlxsw_core = priv;
        struct mlxsw_reg_trans *trans;
 
+       trace_devlink_hwmsg(priv_to_devlink(mlxsw_core), true, 0,
+                           skb->data, skb->len);
+
        if (!mlxsw_emad_is_resp(skb))
                goto free_skb;
 
@@ -1110,14 +1118,14 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
        if (err)
                goto err_emad_init;
 
-       err = mlxsw_hwmon_init(mlxsw_core, mlxsw_bus_info, &mlxsw_core->hwmon);
-       if (err)
-               goto err_hwmon_init;
-
        err = devlink_register(devlink, mlxsw_bus_info->dev);
        if (err)
                goto err_devlink_register;
 
+       err = mlxsw_hwmon_init(mlxsw_core, mlxsw_bus_info, &mlxsw_core->hwmon);
+       if (err)
+               goto err_hwmon_init;
+
        err = mlxsw_driver->init(mlxsw_core, mlxsw_bus_info);
        if (err)
                goto err_driver_init;
@@ -1131,9 +1139,9 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
 err_debugfs_init:
        mlxsw_core->driver->fini(mlxsw_core);
 err_driver_init:
+err_hwmon_init:
        devlink_unregister(devlink);
 err_devlink_register:
-err_hwmon_init:
        mlxsw_emad_fini(mlxsw_core);
 err_emad_init:
        mlxsw_bus->fini(bus_priv);
index 0cc1485..5b2a0b9 100644 (file)
@@ -2503,6 +2503,7 @@ MLXSW_ITEM32(reg, ppcnt, pnat, 0x00, 14, 2);
 enum mlxsw_reg_ppcnt_grp {
        MLXSW_REG_PPCNT_IEEE_8023_CNT = 0x0,
        MLXSW_REG_PPCNT_PRIO_CNT = 0x10,
+       MLXSW_REG_PPCNT_TC_CNT = 0x11,
 };
 
 /* reg_ppcnt_grp
@@ -2703,6 +2704,23 @@ MLXSW_ITEM64(reg, ppcnt, tx_pause_duration, 0x08 + 0x68, 0, 64);
  */
 MLXSW_ITEM64(reg, ppcnt, tx_pause_transition, 0x08 + 0x70, 0, 64);
 
+/* Ethernet Per Traffic Group Counters */
+
+/* reg_ppcnt_tc_transmit_queue
+ * Contains the transmit queue depth in cells of traffic class
+ * selected by prio_tc and the port selected by local_port.
+ * The field cannot be cleared.
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tc_transmit_queue, 0x08 + 0x00, 0, 64);
+
+/* reg_ppcnt_tc_no_buffer_discard_uc
+ * The number of unicast packets dropped due to lack of shared
+ * buffer resources.
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tc_no_buffer_discard_uc, 0x08 + 0x08, 0, 64);
+
 static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port,
                                        enum mlxsw_reg_ppcnt_grp grp,
                                        u8 prio_tc)
@@ -2721,7 +2739,7 @@ static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port,
  * Configures the switch priority to buffer table.
  */
 #define MLXSW_REG_PPTB_ID 0x500B
-#define MLXSW_REG_PPTB_LEN 0x0C
+#define MLXSW_REG_PPTB_LEN 0x10
 
 static const struct mlxsw_reg_info mlxsw_reg_pptb = {
        .id = MLXSW_REG_PPTB_ID,
@@ -2787,6 +2805,13 @@ MLXSW_ITEM32(reg, pptb, pm_msb, 0x08, 24, 8);
  */
 MLXSW_ITEM32(reg, pptb, untagged_buff, 0x08, 0, 4);
 
+/* reg_pptb_prio_to_buff_msb
+ * Mapping of switch priority <i+8> to one of the allocated receive port
+ * buffers.
+ * Access: RW
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff_msb, 0x0C, 0x04, 4);
+
 #define MLXSW_REG_PPTB_ALL_PRIO 0xFF
 
 static inline void mlxsw_reg_pptb_pack(char *payload, u8 local_port)
@@ -2795,6 +2820,14 @@ static inline void mlxsw_reg_pptb_pack(char *payload, u8 local_port)
        mlxsw_reg_pptb_mm_set(payload, MLXSW_REG_PPTB_MM_UM);
        mlxsw_reg_pptb_local_port_set(payload, local_port);
        mlxsw_reg_pptb_pm_set(payload, MLXSW_REG_PPTB_ALL_PRIO);
+       mlxsw_reg_pptb_pm_msb_set(payload, MLXSW_REG_PPTB_ALL_PRIO);
+}
+
+static inline void mlxsw_reg_pptb_prio_to_buff_pack(char *payload, u8 prio,
+                                                   u8 buff)
+{
+       mlxsw_reg_pptb_prio_to_buff_set(payload, prio, buff);
+       mlxsw_reg_pptb_prio_to_buff_msb_set(payload, prio, buff);
 }
 
 /* PBMC - Port Buffer Management Control Register
index c812513..2ba8cc4 100644 (file)
@@ -173,23 +173,6 @@ static int mlxsw_sp_port_admin_status_set(struct mlxsw_sp_port *mlxsw_sp_port,
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(paos), paos_pl);
 }
 
-static int mlxsw_sp_port_oper_status_get(struct mlxsw_sp_port *mlxsw_sp_port,
-                                        bool *p_is_up)
-{
-       struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
-       char paos_pl[MLXSW_REG_PAOS_LEN];
-       u8 oper_status;
-       int err;
-
-       mlxsw_reg_paos_pack(paos_pl, mlxsw_sp_port->local_port, 0);
-       err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(paos), paos_pl);
-       if (err)
-               return err;
-       oper_status = mlxsw_reg_paos_oper_status_get(paos_pl);
-       *p_is_up = oper_status == MLXSW_PORT_ADMIN_STATUS_UP ? true : false;
-       return 0;
-}
-
 static int mlxsw_sp_port_dev_addr_set(struct mlxsw_sp_port *mlxsw_sp_port,
                                      unsigned char *addr)
 {
@@ -899,7 +882,7 @@ struct mlxsw_sp_port_hw_stats {
        u64 (*getter)(char *payload);
 };
 
-static const struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_stats[] = {
+static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_stats[] = {
        {
                .str = "a_frames_transmitted_ok",
                .getter = mlxsw_reg_ppcnt_a_frames_transmitted_ok_get,
@@ -980,6 +963,90 @@ static const struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_stats[] = {
 
 #define MLXSW_SP_PORT_HW_STATS_LEN ARRAY_SIZE(mlxsw_sp_port_hw_stats)
 
+static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_prio_stats[] = {
+       {
+               .str = "rx_octets_prio",
+               .getter = mlxsw_reg_ppcnt_rx_octets_get,
+       },
+       {
+               .str = "rx_frames_prio",
+               .getter = mlxsw_reg_ppcnt_rx_frames_get,
+       },
+       {
+               .str = "tx_octets_prio",
+               .getter = mlxsw_reg_ppcnt_tx_octets_get,
+       },
+       {
+               .str = "tx_frames_prio",
+               .getter = mlxsw_reg_ppcnt_tx_frames_get,
+       },
+       {
+               .str = "rx_pause_prio",
+               .getter = mlxsw_reg_ppcnt_rx_pause_get,
+       },
+       {
+               .str = "rx_pause_duration_prio",
+               .getter = mlxsw_reg_ppcnt_rx_pause_duration_get,
+       },
+       {
+               .str = "tx_pause_prio",
+               .getter = mlxsw_reg_ppcnt_tx_pause_get,
+       },
+       {
+               .str = "tx_pause_duration_prio",
+               .getter = mlxsw_reg_ppcnt_tx_pause_duration_get,
+       },
+};
+
+#define MLXSW_SP_PORT_HW_PRIO_STATS_LEN ARRAY_SIZE(mlxsw_sp_port_hw_prio_stats)
+
+static u64 mlxsw_reg_ppcnt_tc_transmit_queue_bytes_get(char *ppcnt_pl)
+{
+       u64 transmit_queue = mlxsw_reg_ppcnt_tc_transmit_queue_get(ppcnt_pl);
+
+       return MLXSW_SP_CELLS_TO_BYTES(transmit_queue);
+}
+
+static struct mlxsw_sp_port_hw_stats mlxsw_sp_port_hw_tc_stats[] = {
+       {
+               .str = "tc_transmit_queue_tc",
+               .getter = mlxsw_reg_ppcnt_tc_transmit_queue_bytes_get,
+       },
+       {
+               .str = "tc_no_buffer_discard_uc_tc",
+               .getter = mlxsw_reg_ppcnt_tc_no_buffer_discard_uc_get,
+       },
+};
+
+#define MLXSW_SP_PORT_HW_TC_STATS_LEN ARRAY_SIZE(mlxsw_sp_port_hw_tc_stats)
+
+#define MLXSW_SP_PORT_ETHTOOL_STATS_LEN (MLXSW_SP_PORT_HW_STATS_LEN + \
+                                        (MLXSW_SP_PORT_HW_PRIO_STATS_LEN + \
+                                         MLXSW_SP_PORT_HW_TC_STATS_LEN) * \
+                                        IEEE_8021QAZ_MAX_TCS)
+
+static void mlxsw_sp_port_get_prio_strings(u8 **p, int prio)
+{
+       int i;
+
+       for (i = 0; i < MLXSW_SP_PORT_HW_PRIO_STATS_LEN; i++) {
+               snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
+                        mlxsw_sp_port_hw_prio_stats[i].str, prio);
+               *p += ETH_GSTRING_LEN;
+       }
+}
+
+static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
+{
+       int i;
+
+       for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
+               snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
+                        mlxsw_sp_port_hw_tc_stats[i].str, tc);
+               *p += ETH_GSTRING_LEN;
+       }
+}
+
 static void mlxsw_sp_port_get_strings(struct net_device *dev,
                                      u32 stringset, u8 *data)
 {
@@ -993,6 +1060,13 @@ static void mlxsw_sp_port_get_strings(struct net_device *dev,
                               ETH_GSTRING_LEN);
                        p += ETH_GSTRING_LEN;
                }
+
+               for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+                       mlxsw_sp_port_get_prio_strings(&p, i);
+
+               for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+                       mlxsw_sp_port_get_tc_strings(&p, i);
+
                break;
        }
 }
@@ -1020,27 +1094,80 @@ static int mlxsw_sp_port_set_phys_id(struct net_device *dev,
        return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(mlcr), mlcr_pl);
 }
 
-static void mlxsw_sp_port_get_stats(struct net_device *dev,
-                                   struct ethtool_stats *stats, u64 *data)
+static int
+mlxsw_sp_get_hw_stats_by_group(struct mlxsw_sp_port_hw_stats **p_hw_stats,
+                              int *p_len, enum mlxsw_reg_ppcnt_grp grp)
+{
+       switch (grp) {
+       case MLXSW_REG_PPCNT_IEEE_8023_CNT:
+               *p_hw_stats = mlxsw_sp_port_hw_stats;
+               *p_len = MLXSW_SP_PORT_HW_STATS_LEN;
+               break;
+       case MLXSW_REG_PPCNT_PRIO_CNT:
+               *p_hw_stats = mlxsw_sp_port_hw_prio_stats;
+               *p_len = MLXSW_SP_PORT_HW_PRIO_STATS_LEN;
+               break;
+       case MLXSW_REG_PPCNT_TC_CNT:
+               *p_hw_stats = mlxsw_sp_port_hw_tc_stats;
+               *p_len = MLXSW_SP_PORT_HW_TC_STATS_LEN;
+               break;
+       default:
+               WARN_ON(1);
+               return -ENOTSUPP;
+       }
+       return 0;
+}
+
+static void __mlxsw_sp_port_get_stats(struct net_device *dev,
+                                     enum mlxsw_reg_ppcnt_grp grp, int prio,
+                                     u64 *data, int data_index)
 {
        struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
        struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+       struct mlxsw_sp_port_hw_stats *hw_stats;
        char ppcnt_pl[MLXSW_REG_PPCNT_LEN];
-       int i;
+       int i, len;
        int err;
 
-       mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sp_port->local_port,
-                            MLXSW_REG_PPCNT_IEEE_8023_CNT, 0);
+       err = mlxsw_sp_get_hw_stats_by_group(&hw_stats, &len, grp);
+       if (err)
+               return;
+       mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sp_port->local_port, grp, prio);
        err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(ppcnt), ppcnt_pl);
-       for (i = 0; i < MLXSW_SP_PORT_HW_STATS_LEN; i++)
-               data[i] = !err ? mlxsw_sp_port_hw_stats[i].getter(ppcnt_pl) : 0;
+       for (i = 0; i < len; i++)
+               data[data_index + i] = !err ? hw_stats[i].getter(ppcnt_pl) : 0;
+}
+
+static void mlxsw_sp_port_get_stats(struct net_device *dev,
+                                   struct ethtool_stats *stats, u64 *data)
+{
+       int i, data_index = 0;
+
+       /* IEEE 802.3 Counters */
+       __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_IEEE_8023_CNT, 0,
+                                 data, data_index);
+       data_index = MLXSW_SP_PORT_HW_STATS_LEN;
+
+       /* Per-Priority Counters */
+       for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+               __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_PRIO_CNT, i,
+                                         data, data_index);
+               data_index += MLXSW_SP_PORT_HW_PRIO_STATS_LEN;
+       }
+
+       /* Per-TC Counters */
+       for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+               __mlxsw_sp_port_get_stats(dev, MLXSW_REG_PPCNT_TC_CNT, i,
+                                         data, data_index);
+               data_index += MLXSW_SP_PORT_HW_TC_STATS_LEN;
+       }
 }
 
 static int mlxsw_sp_port_get_sset_count(struct net_device *dev, int sset)
 {
        switch (sset) {
        case ETH_SS_STATS:
-               return MLXSW_SP_PORT_HW_STATS_LEN;
+               return MLXSW_SP_PORT_ETHTOOL_STATS_LEN;
        default:
                return -EOPNOTSUPP;
        }
@@ -1261,7 +1388,8 @@ static int mlxsw_sp_port_get_settings(struct net_device *dev,
 
        cmd->supported = mlxsw_sp_from_ptys_supported_port(eth_proto_cap) |
                         mlxsw_sp_from_ptys_supported_link(eth_proto_cap) |
-                        SUPPORTED_Pause | SUPPORTED_Asym_Pause;
+                        SUPPORTED_Pause | SUPPORTED_Asym_Pause |
+                        SUPPORTED_Autoneg;
        cmd->advertising = mlxsw_sp_from_ptys_advert_link(eth_proto_admin);
        mlxsw_sp_from_ptys_speed_duplex(netif_carrier_ok(dev),
                                        eth_proto_oper, cmd);
@@ -1320,7 +1448,6 @@ static int mlxsw_sp_port_set_settings(struct net_device *dev,
        u32 eth_proto_new;
        u32 eth_proto_cap;
        u32 eth_proto_admin;
-       bool is_up;
        int err;
 
        speed = ethtool_cmd_speed(cmd);
@@ -1352,12 +1479,7 @@ static int mlxsw_sp_port_set_settings(struct net_device *dev,
                return err;
        }
 
-       err = mlxsw_sp_port_oper_status_get(mlxsw_sp_port, &is_up);
-       if (err) {
-               netdev_err(dev, "Failed to get oper status");
-               return err;
-       }
-       if (!is_up)
+       if (!netif_running(dev))
                return 0;
 
        err = mlxsw_sp_port_admin_status_set(mlxsw_sp_port, false);
index a3720a0..074cdda 100644 (file)
@@ -194,7 +194,7 @@ static int mlxsw_sp_port_pb_prio_init(struct mlxsw_sp_port *mlxsw_sp_port)
 
        mlxsw_reg_pptb_pack(pptb_pl, mlxsw_sp_port->local_port);
        for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
-               mlxsw_reg_pptb_prio_to_buff_set(pptb_pl, i, 0);
+               mlxsw_reg_pptb_prio_to_buff_pack(pptb_pl, i, 0);
        return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pptb),
                               pptb_pl);
 }
index 0b32366..01cfb75 100644 (file)
@@ -103,7 +103,8 @@ static int mlxsw_sp_port_pg_prio_map(struct mlxsw_sp_port *mlxsw_sp_port,
 
        mlxsw_reg_pptb_pack(pptb_pl, mlxsw_sp_port->local_port);
        for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
-               mlxsw_reg_pptb_prio_to_buff_set(pptb_pl, i, prio_tc[i]);
+               mlxsw_reg_pptb_prio_to_buff_pack(pptb_pl, i, prio_tc[i]);
+
        return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pptb),
                               pptb_pl);
 }
@@ -249,6 +250,7 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev,
                return err;
 
        memcpy(mlxsw_sp_port->dcb.ets, ets, sizeof(*ets));
+       mlxsw_sp_port->dcb.ets->ets_cap = IEEE_8021QAZ_MAX_TCS;
 
        return 0;
 }
@@ -351,7 +353,8 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
        struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
        int err;
 
-       if (mlxsw_sp_port->link.tx_pause || mlxsw_sp_port->link.rx_pause) {
+       if ((mlxsw_sp_port->link.tx_pause || mlxsw_sp_port->link.rx_pause) &&
+           pfc->pfc_en) {
                netdev_err(dev, "PAUSE frames already enabled on port\n");
                return -EINVAL;
        }
@@ -371,6 +374,7 @@ static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
        }
 
        memcpy(mlxsw_sp_port->dcb.pfc, pfc, sizeof(*pfc));
+       mlxsw_sp_port->dcb.pfc->pfc_cap = IEEE_8021QAZ_MAX_TCS;
 
        return 0;
 
index e084ea5..81418d6 100644 (file)
@@ -1803,7 +1803,7 @@ int mlxsw_sp_router_fib4_del(struct mlxsw_sp_port *mlxsw_sp_port,
                                              sizeof(fib4->dst), fib4->dst_len);
        if (!fib_entry) {
                dev_warn(mlxsw_sp->bus_info->dev, "Failed to find FIB4 entry being removed.\n");
-               return PTR_ERR(vr);
+               return -ENOENT;
        }
        mlxsw_sp_fib_entry_del(mlxsw_sp_port->mlxsw_sp, fib_entry);
        mlxsw_sp_fib_entry_remove(vr->fib, fib_entry);
index 1e74b91..88678c1 100644 (file)
@@ -2578,7 +2578,7 @@ static void nfp_net_del_vxlan_port(struct net_device *netdev,
                return;
 
        idx = nfp_net_find_vxlan_idx(nn, ti->port);
-       if (!nn->vxlan_usecnt[idx] || idx == -ENOSPC)
+       if (idx == -ENOSPC || !nn->vxlan_usecnt[idx])
                return;
 
        if (!--nn->vxlan_usecnt[idx])
index af54df5..2f4a837 100644 (file)
@@ -989,7 +989,7 @@ static void pasemi_adjust_link(struct net_device *dev)
        unsigned int flags;
        unsigned int new_flags;
 
-       if (!mac->phydev->link) {
+       if (!dev->phydev->link) {
                /* If no link, MAC speed settings don't matter. Just report
                 * link down and return.
                 */
@@ -1010,10 +1010,10 @@ static void pasemi_adjust_link(struct net_device *dev)
        new_flags = flags & ~(PAS_MAC_CFG_PCFG_HD | PAS_MAC_CFG_PCFG_SPD_M |
                              PAS_MAC_CFG_PCFG_TSR_M);
 
-       if (!mac->phydev->duplex)
+       if (!dev->phydev->duplex)
                new_flags |= PAS_MAC_CFG_PCFG_HD;
 
-       switch (mac->phydev->speed) {
+       switch (dev->phydev->speed) {
        case 1000:
                new_flags |= PAS_MAC_CFG_PCFG_SPD_1G |
                             PAS_MAC_CFG_PCFG_TSR_1G;
@@ -1027,15 +1027,15 @@ static void pasemi_adjust_link(struct net_device *dev)
                             PAS_MAC_CFG_PCFG_TSR_10M;
                break;
        default:
-               printk("Unsupported speed %d\n", mac->phydev->speed);
+               printk("Unsupported speed %d\n", dev->phydev->speed);
        }
 
        /* Print on link or speed/duplex change */
-       msg = mac->link != mac->phydev->link || flags != new_flags;
+       msg = mac->link != dev->phydev->link || flags != new_flags;
 
-       mac->duplex = mac->phydev->duplex;
-       mac->speed = mac->phydev->speed;
-       mac->link = mac->phydev->link;
+       mac->duplex = dev->phydev->duplex;
+       mac->speed = dev->phydev->speed;
+       mac->link = dev->phydev->link;
 
        if (new_flags != flags)
                write_mac_reg(mac, PAS_MAC_CFG_PCFG, new_flags);
@@ -1067,8 +1067,6 @@ static int pasemi_mac_phy_init(struct net_device *dev)
                return -ENODEV;
        }
 
-       mac->phydev = phydev;
-
        return 0;
 }
 
@@ -1198,8 +1196,8 @@ static int pasemi_mac_open(struct net_device *dev)
                goto out_rx_int;
        }
 
-       if (mac->phydev)
-               phy_start(mac->phydev);
+       if (dev->phydev)
+               phy_start(dev->phydev);
 
        setup_timer(&mac->tx->clean_timer, pasemi_mac_tx_timer,
                    (unsigned long)mac->tx);
@@ -1293,9 +1291,9 @@ static int pasemi_mac_close(struct net_device *dev)
        rxch = rx_ring(mac)->chan.chno;
        txch = tx_ring(mac)->chan.chno;
 
-       if (mac->phydev) {
-               phy_stop(mac->phydev);
-               phy_disconnect(mac->phydev);
+       if (dev->phydev) {
+               phy_stop(dev->phydev);
+               phy_disconnect(dev->phydev);
        }
 
        del_timer_sync(&mac->tx->clean_timer);
index 161c99a..7c47e26 100644 (file)
@@ -70,7 +70,6 @@ struct pasemi_mac {
        struct pci_dev *pdev;
        struct pci_dev *dma_pdev;
        struct pci_dev *iob_pdev;
-       struct phy_device *phydev;
        struct napi_struct napi;
 
        int             bufsz; /* RX ring buffer size */
index f046bfc..d0afc2b 100644 (file)
@@ -62,32 +62,6 @@ static struct {
        { "tx-1024-1518-byte-packets" },
 };
 
-static int
-pasemi_mac_ethtool_get_settings(struct net_device *netdev,
-                              struct ethtool_cmd *cmd)
-{
-       struct pasemi_mac *mac = netdev_priv(netdev);
-       struct phy_device *phydev = mac->phydev;
-
-       if (!phydev)
-               return -EOPNOTSUPP;
-
-       return phy_ethtool_gset(phydev, cmd);
-}
-
-static int
-pasemi_mac_ethtool_set_settings(struct net_device *netdev,
-                              struct ethtool_cmd *cmd)
-{
-       struct pasemi_mac *mac = netdev_priv(netdev);
-       struct phy_device *phydev = mac->phydev;
-
-       if (!phydev)
-               return -EOPNOTSUPP;
-
-       return phy_ethtool_sset(phydev, cmd);
-}
-
 static u32
 pasemi_mac_ethtool_get_msglevel(struct net_device *netdev)
 {
@@ -145,8 +119,6 @@ static void pasemi_mac_get_strings(struct net_device *netdev, u32 stringset,
 }
 
 const struct ethtool_ops pasemi_mac_ethtool_ops = {
-       .get_settings           = pasemi_mac_ethtool_get_settings,
-       .set_settings           = pasemi_mac_ethtool_set_settings,
        .get_msglevel           = pasemi_mac_ethtool_get_msglevel,
        .set_msglevel           = pasemi_mac_ethtool_set_msglevel,
        .get_link               = ethtool_op_get_link,
@@ -154,5 +126,7 @@ const struct ethtool_ops pasemi_mac_ethtool_ops = {
        .get_strings            = pasemi_mac_get_strings,
        .get_sset_count         = pasemi_mac_get_sset_count,
        .get_ethtool_stats      = pasemi_mac_get_ethtool_stats,
+       .get_link_ksettings     = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings     = phy_ethtool_set_link_ksettings,
 };
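The hunks above follow the series-wide pattern: the per-driver `phy_device` pointer moves out of the driver's private struct into `net_device->phydev`, which lets the driver drop its own `get_settings`/`set_settings` callbacks and plug in the generic `phy_ethtool_{get,set}_link_ksettings` helpers. A minimal userspace sketch of why that works, using hypothetical mock structs rather than the real kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the migration: the PHY pointer
 * lives on the (mock) net_device itself, so one generic helper can
 * serve every driver instead of per-driver callbacks that each
 * reach into their own private struct. Not the real kernel API. */

struct mock_phy { int speed; };

struct mock_netdev {
	struct mock_phy *phydev;   /* was: a field in the driver's private struct */
};

/* Stand-in for phy_ethtool_get_link_ksettings(): driver-agnostic
 * because it only needs dev->phydev. */
static int mock_get_link_ksettings(struct mock_netdev *dev, int *speed_out)
{
	if (!dev->phydev)
		return -1;          /* the per-driver NULL checks become uniform */
	*speed_out = dev->phydev->speed;
	return 0;
}
```

The same generic pair is wired into every converted `ethtool_ops` table in this series (pasemi, smsc9420, cpmac), which is why each driver's boilerplate getters and setters can simply be deleted.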
 
index 5349284..8377d02 100644
@@ -1908,7 +1908,6 @@ static int ravb_probe(struct platform_device *pdev)
 
        /* The Ether-specific entries in the device structure. */
        ndev->base_addr = res->start;
-       ndev->dma = -1;
 
        chip_id = (enum ravb_chip_id)of_device_get_match_data(&pdev->dev);
 
index 04cd39f..7bd910c 100644
@@ -2996,7 +2996,6 @@ static int sh_eth_drv_probe(struct platform_device *pdev)
        if (devno < 0)
                devno = 0;
 
-       ndev->dma = -1;
        ret = platform_get_irq(pdev, 0);
        if (ret < 0)
                goto out_release;
index 18ac52d..726b80f 100644
@@ -2195,6 +2195,12 @@ static void smc_release_datacs(struct platform_device *pdev, struct net_device *
        }
 }
 
+static const struct acpi_device_id smc91x_acpi_match[] = {
+       { "LNRO0003", 0 },
+       { }
+};
+MODULE_DEVICE_TABLE(acpi, smc91x_acpi_match);
+
 #if IS_BUILTIN(CONFIG_OF)
 static const struct of_device_id smc91x_match[] = {
        { .compatible = "smsc,lan91c94", },
@@ -2274,7 +2280,6 @@ static int smc_drv_probe(struct platform_device *pdev)
 #if IS_BUILTIN(CONFIG_OF)
        match = of_match_device(of_match_ptr(smc91x_match), &pdev->dev);
        if (match) {
-               struct device_node *np = pdev->dev.of_node;
                u32 val;
 
                /* Optional pwrdwn GPIO configured? */
@@ -2300,7 +2305,8 @@ static int smc_drv_probe(struct platform_device *pdev)
                        usleep_range(750, 1000);
 
                /* Combination of IO widths supported, default to 16-bit */
-               if (!of_property_read_u32(np, "reg-io-width", &val)) {
+               if (!device_property_read_u32(&pdev->dev, "reg-io-width",
+                                             &val)) {
                        if (val & 1)
                                lp->cfg.flags |= SMC91X_USE_8BIT;
                        if ((val == 0) || (val & 2))
@@ -2478,7 +2484,8 @@ static struct platform_driver smc_driver = {
        .driver         = {
                .name   = CARDNAME,
                .pm     = &smc_drv_pm_ops,
-               .of_match_table = of_match_ptr(smc91x_match),
+               .of_match_table   = of_match_ptr(smc91x_match),
+               .acpi_match_table = smc91x_acpi_match,
        },
 };
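The smc91x change above adds an ACPI match table and, crucially, swaps `of_property_read_u32()` for `device_property_read_u32()`: the unified `device_property_*` helpers resolve the same key whether it came from a devicetree node or an ACPI _DSD table, so one probe path serves both firmware types. A toy model of that lookup order, with all names illustrative rather than the kernel API:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the unified property lookup: try the DT-style table
 * first, then the ACPI-style one, returning 0 on a hit. The real
 * kernel helper dispatches on the fwnode type instead; this only
 * illustrates the "one call, either firmware" idea. */

struct prop { const char *name; uint32_t val; };

struct mock_dev {
	const struct prop *of_props;   int n_of;
	const struct prop *acpi_props; int n_acpi;
};

static int lookup(const struct prop *p, int n, const char *name, uint32_t *out)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(p[i].name, name)) { *out = p[i].val; return 0; }
	return -1;
}

/* Stand-in for device_property_read_u32(). */
static int mock_device_property_read_u32(const struct mock_dev *d,
					 const char *name, uint32_t *out)
{
	if (!lookup(d->of_props, d->n_of, name, out))
		return 0;
	return lookup(d->acpi_props, d->n_acpi, name, out);
}
```

This is why the hunk can also drop the local `struct device_node *np`: the driver no longer needs to care which firmware described the `reg-io-width` property.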
 
index 8594b9e..b7bfed4 100644
@@ -76,7 +76,6 @@ struct smsc9420_pdata {
        bool rx_csum;
        u32 msg_enable;
 
-       struct phy_device *phy_dev;
        struct mii_bus *mii_bus;
        int last_duplex;
        int last_carrier;
@@ -226,36 +225,10 @@ static int smsc9420_eeprom_reload(struct smsc9420_pdata *pd)
 /* Standard ioctls for mii-tool */
 static int smsc9420_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 {
-       struct smsc9420_pdata *pd = netdev_priv(dev);
-
-       if (!netif_running(dev) || !pd->phy_dev)
+       if (!netif_running(dev) || !dev->phydev)
                return -EINVAL;
 
-       return phy_mii_ioctl(pd->phy_dev, ifr, cmd);
-}
-
-static int smsc9420_ethtool_get_settings(struct net_device *dev,
-                                        struct ethtool_cmd *cmd)
-{
-       struct smsc9420_pdata *pd = netdev_priv(dev);
-
-       if (!pd->phy_dev)
-               return -ENODEV;
-
-       cmd->maxtxpkt = 1;
-       cmd->maxrxpkt = 1;
-       return phy_ethtool_gset(pd->phy_dev, cmd);
-}
-
-static int smsc9420_ethtool_set_settings(struct net_device *dev,
-                                        struct ethtool_cmd *cmd)
-{
-       struct smsc9420_pdata *pd = netdev_priv(dev);
-
-       if (!pd->phy_dev)
-               return -ENODEV;
-
-       return phy_ethtool_sset(pd->phy_dev, cmd);
+       return phy_mii_ioctl(dev->phydev, ifr, cmd);
 }
 
 static void smsc9420_ethtool_get_drvinfo(struct net_device *netdev,
@@ -283,12 +256,10 @@ static void smsc9420_ethtool_set_msglevel(struct net_device *netdev, u32 data)
 
 static int smsc9420_ethtool_nway_reset(struct net_device *netdev)
 {
-       struct smsc9420_pdata *pd = netdev_priv(netdev);
-
-       if (!pd->phy_dev)
+       if (!netdev->phydev)
                return -ENODEV;
 
-       return phy_start_aneg(pd->phy_dev);
+       return phy_start_aneg(netdev->phydev);
 }
 
 static int smsc9420_ethtool_getregslen(struct net_device *dev)
@@ -302,7 +273,7 @@ smsc9420_ethtool_getregs(struct net_device *dev, struct ethtool_regs *regs,
                         void *buf)
 {
        struct smsc9420_pdata *pd = netdev_priv(dev);
-       struct phy_device *phy_dev = pd->phy_dev;
+       struct phy_device *phy_dev = dev->phydev;
        unsigned int i, j = 0;
        u32 *data = buf;
 
@@ -443,8 +414,6 @@ static int smsc9420_ethtool_set_eeprom(struct net_device *dev,
 }
 
 static const struct ethtool_ops smsc9420_ethtool_ops = {
-       .get_settings = smsc9420_ethtool_get_settings,
-       .set_settings = smsc9420_ethtool_set_settings,
        .get_drvinfo = smsc9420_ethtool_get_drvinfo,
        .get_msglevel = smsc9420_ethtool_get_msglevel,
        .set_msglevel = smsc9420_ethtool_set_msglevel,
@@ -456,6 +425,8 @@ static const struct ethtool_ops smsc9420_ethtool_ops = {
        .get_regs_len = smsc9420_ethtool_getregslen,
        .get_regs = smsc9420_ethtool_getregs,
        .get_ts_info = ethtool_op_get_ts_info,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 /* Sets the device MAC address to dev_addr */
@@ -736,7 +707,7 @@ static int smsc9420_stop(struct net_device *dev)
        ulong flags;
 
        BUG_ON(!pd);
-       BUG_ON(!pd->phy_dev);
+       BUG_ON(!dev->phydev);
 
        /* disable master interrupt */
        spin_lock_irqsave(&pd->int_lock, flags);
@@ -757,10 +728,9 @@ static int smsc9420_stop(struct net_device *dev)
 
        smsc9420_dmac_soft_reset(pd);
 
-       phy_stop(pd->phy_dev);
+       phy_stop(dev->phydev);
 
-       phy_disconnect(pd->phy_dev);
-       pd->phy_dev = NULL;
+       phy_disconnect(dev->phydev);
        mdiobus_unregister(pd->mii_bus);
        mdiobus_free(pd->mii_bus);
 
@@ -1093,7 +1063,8 @@ static void smsc9420_set_multicast_list(struct net_device *dev)
 
 static void smsc9420_phy_update_flowcontrol(struct smsc9420_pdata *pd)
 {
-       struct phy_device *phy_dev = pd->phy_dev;
+       struct net_device *dev = pd->dev;
+       struct phy_device *phy_dev = dev->phydev;
        u32 flow;
 
        if (phy_dev->duplex == DUPLEX_FULL) {
@@ -1122,7 +1093,7 @@ static void smsc9420_phy_update_flowcontrol(struct smsc9420_pdata *pd)
 static void smsc9420_phy_adjust_link(struct net_device *dev)
 {
        struct smsc9420_pdata *pd = netdev_priv(dev);
-       struct phy_device *phy_dev = pd->phy_dev;
+       struct phy_device *phy_dev = dev->phydev;
        int carrier;
 
        if (phy_dev->duplex != pd->last_duplex) {
@@ -1155,7 +1126,7 @@ static int smsc9420_mii_probe(struct net_device *dev)
        struct smsc9420_pdata *pd = netdev_priv(dev);
        struct phy_device *phydev = NULL;
 
-       BUG_ON(pd->phy_dev);
+       BUG_ON(dev->phydev);
 
        /* Device only supports internal PHY at address 1 */
        phydev = mdiobus_get_phy(pd->mii_bus, 1);
@@ -1179,7 +1150,6 @@ static int smsc9420_mii_probe(struct net_device *dev)
 
        phy_attached_info(phydev);
 
-       pd->phy_dev = phydev;
        pd->last_duplex = -1;
        pd->last_carrier = -1;
 
@@ -1440,7 +1410,7 @@ static int smsc9420_open(struct net_device *dev)
        }
 
        /* Bring the PHY up */
-       phy_start(pd->phy_dev);
+       phy_start(dev->phydev);
 
        napi_enable(&pd->napi);
 
index 0fb362d..44b630c 100644
@@ -11,11 +11,12 @@ obj-$(CONFIG_DWMAC_IPQ806X) += dwmac-ipq806x.o
 obj-$(CONFIG_DWMAC_LPC18XX)    += dwmac-lpc18xx.o
 obj-$(CONFIG_DWMAC_MESON)      += dwmac-meson.o
 obj-$(CONFIG_DWMAC_ROCKCHIP)   += dwmac-rk.o
-obj-$(CONFIG_DWMAC_SOCFPGA)    += dwmac-socfpga.o
+obj-$(CONFIG_DWMAC_SOCFPGA)    += dwmac-altr-socfpga.o
 obj-$(CONFIG_DWMAC_STI)                += dwmac-sti.o
 obj-$(CONFIG_DWMAC_SUNXI)      += dwmac-sunxi.o
 obj-$(CONFIG_DWMAC_GENERIC)    += dwmac-generic.o
 stmmac-platform-objs:= stmmac_platform.o
+dwmac-altr-socfpga-objs := altr_tse_pcs.o dwmac-socfpga.o
 
 obj-$(CONFIG_STMMAC_PCI) += stmmac-pci.o
 stmmac-pci-objs:= stmmac_pci.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.c
new file mode 100644
index 0000000..2920e2e
--- /dev/null
@@ -0,0 +1,274 @@
+/* Copyright Altera Corporation (C) 2016. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Tien Hock Loh <thloh@altera.com>
+ */
+
+#include <linux/mfd/syscon.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_net.h>
+#include <linux/phy.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/stmmac.h>
+
+#include "stmmac.h"
+#include "stmmac_platform.h"
+#include "altr_tse_pcs.h"
+
+#define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII       0
+#define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII          BIT(1)
+#define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RMII           BIT(2)
+#define SYSMGR_EMACGRP_CTRL_PHYSEL_WIDTH               2
+#define SYSMGR_EMACGRP_CTRL_PHYSEL_MASK                        GENMASK(1, 0)
+
+#define TSE_PCS_CONTROL_AN_EN_MASK                     BIT(12)
+#define TSE_PCS_CONTROL_REG                            0x00
+#define TSE_PCS_CONTROL_RESTART_AN_MASK                        BIT(9)
+#define TSE_PCS_IF_MODE_REG                            0x28
+#define TSE_PCS_LINK_TIMER_0_REG                       0x24
+#define TSE_PCS_LINK_TIMER_1_REG                       0x26
+#define TSE_PCS_SIZE                                   0x40
+#define TSE_PCS_STATUS_AN_COMPLETED_MASK               BIT(5)
+#define TSE_PCS_STATUS_LINK_MASK                       0x0004
+#define TSE_PCS_STATUS_REG                             0x02
+#define TSE_PCS_SGMII_SPEED_1000                       BIT(3)
+#define TSE_PCS_SGMII_SPEED_100                                BIT(2)
+#define TSE_PCS_SGMII_SPEED_10                         0x0
+#define TSE_PCS_SW_RST_MASK                            0x8000
+#define TSE_PCS_PARTNER_ABILITY_REG                    0x0A
+#define TSE_PCS_PARTNER_DUPLEX_FULL                    0x1000
+#define TSE_PCS_PARTNER_DUPLEX_HALF                    0x0000
+#define TSE_PCS_PARTNER_DUPLEX_MASK                    0x1000
+#define TSE_PCS_PARTNER_SPEED_MASK                     GENMASK(11, 10)
+#define TSE_PCS_PARTNER_SPEED_1000                     BIT(11)
+#define TSE_PCS_PARTNER_SPEED_100                      BIT(10)
+#define TSE_PCS_PARTNER_SPEED_10                       0x0000
+#define TSE_PCS_SGMII_SPEED_MASK                       GENMASK(3, 2)
+#define TSE_PCS_SGMII_LINK_TIMER_0                     0x0D40
+#define TSE_PCS_SGMII_LINK_TIMER_1                     0x0003
+#define TSE_PCS_SW_RESET_TIMEOUT                       100
+#define TSE_PCS_USE_SGMII_AN_MASK                      BIT(2)
+#define TSE_PCS_USE_SGMII_ENA                          BIT(1)
+
+#define SGMII_ADAPTER_CTRL_REG                         0x00
+#define SGMII_ADAPTER_DISABLE                          0x0001
+#define SGMII_ADAPTER_ENABLE                           0x0000
+
+#define AUTONEGO_LINK_TIMER                            20
+
+static int tse_pcs_reset(void __iomem *base, struct tse_pcs *pcs)
+{
+       int counter = 0;
+       u16 val;
+
+       val = readw(base + TSE_PCS_CONTROL_REG);
+       val |= TSE_PCS_SW_RST_MASK;
+       writew(val, base + TSE_PCS_CONTROL_REG);
+
+       while (counter < TSE_PCS_SW_RESET_TIMEOUT) {
+               val = readw(base + TSE_PCS_CONTROL_REG);
+               val &= TSE_PCS_SW_RST_MASK;
+               if (val == 0)
+                       break;
+               counter++;
+               udelay(1);
+       }
+       if (counter >= TSE_PCS_SW_RESET_TIMEOUT) {
+               dev_err(pcs->dev, "PCS could not get out of sw reset\n");
+               return -ETIMEDOUT;
+       }
+
+       return 0;
+}
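`tse_pcs_reset()` above is a classic self-clearing-bit reset: set `TSE_PCS_SW_RST_MASK` in the control register, then poll (with a 1 µs delay per iteration) until the hardware clears it or `TSE_PCS_SW_RESET_TIMEOUT` iterations elapse. The shape of that loop can be simulated in userspace against a fake register; the "hardware" below clearing the bit after a few polls is purely an assumption of the sketch:

```c
#include <assert.h>
#include <stdint.h>

#define SW_RST_MASK   0x8000u
#define RESET_TIMEOUT 100

/* Simulated backing store standing in for readw()/writew() on the
 * PCS control register; the mock "hardware" clears the self-clearing
 * reset bit after a configurable number of polls. */
static uint16_t ctrl_reg;
static int polls_until_clear;

static uint16_t sim_readw(void)
{
	if (polls_until_clear > 0 && --polls_until_clear == 0)
		ctrl_reg &= ~SW_RST_MASK;   /* hardware finishes the reset */
	return ctrl_reg;
}

/* Same shape as tse_pcs_reset(): set the bit, poll until it clears
 * or the timeout expires. Returns 0 on success, -1 on timeout. */
static int sim_pcs_reset(void)
{
	int counter;

	ctrl_reg |= SW_RST_MASK;
	for (counter = 0; counter < RESET_TIMEOUT; counter++) {
		if (!(sim_readw() & SW_RST_MASK))
			return 0;
		/* udelay(1) in the real driver */
	}
	return -1;
}
```

The timeout bound matters because this runs with interrupts enabled but no sleeping allowed in some callers, so the driver busy-waits for a strictly bounded 100 µs rather than polling forever.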
+
+int tse_pcs_init(void __iomem *base, struct tse_pcs *pcs)
+{
+       int ret = 0;
+
+       writew(TSE_PCS_USE_SGMII_ENA, base + TSE_PCS_IF_MODE_REG);
+
+       writew(TSE_PCS_SGMII_LINK_TIMER_0, base + TSE_PCS_LINK_TIMER_0_REG);
+       writew(TSE_PCS_SGMII_LINK_TIMER_1, base + TSE_PCS_LINK_TIMER_1_REG);
+
+       ret = tse_pcs_reset(base, pcs);
+       if (ret == 0)
+               writew(SGMII_ADAPTER_ENABLE,
+                      pcs->sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+
+       return ret;
+}
+
+static void pcs_link_timer_callback(unsigned long data)
+{
+       u16 val = 0;
+       struct tse_pcs *pcs = (struct tse_pcs *)data;
+       void __iomem *tse_pcs_base = pcs->tse_pcs_base;
+       void __iomem *sgmii_adapter_base = pcs->sgmii_adapter_base;
+
+       val = readw(tse_pcs_base + TSE_PCS_STATUS_REG);
+       val &= TSE_PCS_STATUS_LINK_MASK;
+
+       if (val != 0) {
+               dev_dbg(pcs->dev, "Adapter: Link is established\n");
+               writew(SGMII_ADAPTER_ENABLE,
+                      sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+       } else {
+               mod_timer(&pcs->aneg_link_timer, jiffies +
+                         msecs_to_jiffies(AUTONEGO_LINK_TIMER));
+       }
+}
+
+static void auto_nego_timer_callback(unsigned long data)
+{
+       u16 val = 0;
+       u16 speed = 0;
+       u16 duplex = 0;
+       struct tse_pcs *pcs = (struct tse_pcs *)data;
+       void __iomem *tse_pcs_base = pcs->tse_pcs_base;
+       void __iomem *sgmii_adapter_base = pcs->sgmii_adapter_base;
+
+       val = readw(tse_pcs_base + TSE_PCS_STATUS_REG);
+       val &= TSE_PCS_STATUS_AN_COMPLETED_MASK;
+
+       if (val != 0) {
+               dev_dbg(pcs->dev, "Adapter: Auto Negotiation is completed\n");
+               val = readw(tse_pcs_base + TSE_PCS_PARTNER_ABILITY_REG);
+               speed = val & TSE_PCS_PARTNER_SPEED_MASK;
+               duplex = val & TSE_PCS_PARTNER_DUPLEX_MASK;
+
+               if (speed == TSE_PCS_PARTNER_SPEED_10 &&
+                   duplex == TSE_PCS_PARTNER_DUPLEX_FULL)
+                       dev_dbg(pcs->dev,
+                               "Adapter: Link Partner is Up - 10/Full\n");
+               else if (speed == TSE_PCS_PARTNER_SPEED_100 &&
+                        duplex == TSE_PCS_PARTNER_DUPLEX_FULL)
+                       dev_dbg(pcs->dev,
+                               "Adapter: Link Partner is Up - 100/Full\n");
+               else if (speed == TSE_PCS_PARTNER_SPEED_1000 &&
+                        duplex == TSE_PCS_PARTNER_DUPLEX_FULL)
+                       dev_dbg(pcs->dev,
+                               "Adapter: Link Partner is Up - 1000/Full\n");
+               else if (speed == TSE_PCS_PARTNER_SPEED_10 &&
+                        duplex == TSE_PCS_PARTNER_DUPLEX_HALF)
+                       dev_err(pcs->dev,
+                               "Adapter does not support Half Duplex\n");
+               else if (speed == TSE_PCS_PARTNER_SPEED_100 &&
+                        duplex == TSE_PCS_PARTNER_DUPLEX_HALF)
+                       dev_err(pcs->dev,
+                               "Adapter does not support Half Duplex\n");
+               else if (speed == TSE_PCS_PARTNER_SPEED_1000 &&
+                        duplex == TSE_PCS_PARTNER_DUPLEX_HALF)
+                       dev_err(pcs->dev,
+                               "Adapter does not support Half Duplex\n");
+               else
+                       dev_err(pcs->dev,
+                               "Adapter: Invalid Partner Speed and Duplex\n");
+
+               if (duplex == TSE_PCS_PARTNER_DUPLEX_FULL &&
+                   (speed == TSE_PCS_PARTNER_SPEED_10 ||
+                    speed == TSE_PCS_PARTNER_SPEED_100 ||
+                    speed == TSE_PCS_PARTNER_SPEED_1000))
+                       writew(SGMII_ADAPTER_ENABLE,
+                              sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+       } else {
+               val = readw(tse_pcs_base + TSE_PCS_CONTROL_REG);
+               val |= TSE_PCS_CONTROL_RESTART_AN_MASK;
+               writew(val, tse_pcs_base + TSE_PCS_CONTROL_REG);
+
+               tse_pcs_reset(tse_pcs_base, pcs);
+               mod_timer(&pcs->aneg_link_timer, jiffies +
+                         msecs_to_jiffies(AUTONEGO_LINK_TIMER));
+       }
+}
+
+static void aneg_link_timer_callback(unsigned long data)
+{
+       struct tse_pcs *pcs = (struct tse_pcs *)data;
+
+       if (pcs->autoneg == AUTONEG_ENABLE)
+               auto_nego_timer_callback(data);
+       else if (pcs->autoneg == AUTONEG_DISABLE)
+               pcs_link_timer_callback(data);
+}
+
+void tse_pcs_fix_mac_speed(struct tse_pcs *pcs, struct phy_device *phy_dev,
+                          unsigned int speed)
+{
+       void __iomem *tse_pcs_base = pcs->tse_pcs_base;
+       void __iomem *sgmii_adapter_base = pcs->sgmii_adapter_base;
+       u32 val;
+
+       writew(SGMII_ADAPTER_ENABLE,
+              sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+
+       pcs->autoneg = phy_dev->autoneg;
+
+       if (phy_dev->autoneg == AUTONEG_ENABLE) {
+               val = readw(tse_pcs_base + TSE_PCS_CONTROL_REG);
+               val |= TSE_PCS_CONTROL_AN_EN_MASK;
+               writew(val, tse_pcs_base + TSE_PCS_CONTROL_REG);
+
+               val = readw(tse_pcs_base + TSE_PCS_IF_MODE_REG);
+               val |= TSE_PCS_USE_SGMII_AN_MASK;
+               writew(val, tse_pcs_base + TSE_PCS_IF_MODE_REG);
+
+               val = readw(tse_pcs_base + TSE_PCS_CONTROL_REG);
+               val |= TSE_PCS_CONTROL_RESTART_AN_MASK;
+
+               tse_pcs_reset(tse_pcs_base, pcs);
+
+               setup_timer(&pcs->aneg_link_timer,
+                           aneg_link_timer_callback, (unsigned long)pcs);
+               mod_timer(&pcs->aneg_link_timer, jiffies +
+                         msecs_to_jiffies(AUTONEGO_LINK_TIMER));
+       } else if (phy_dev->autoneg == AUTONEG_DISABLE) {
+               val = readw(tse_pcs_base + TSE_PCS_CONTROL_REG);
+               val &= ~TSE_PCS_CONTROL_AN_EN_MASK;
+               writew(val, tse_pcs_base + TSE_PCS_CONTROL_REG);
+
+               val = readw(tse_pcs_base + TSE_PCS_IF_MODE_REG);
+               val &= ~TSE_PCS_USE_SGMII_AN_MASK;
+               writew(val, tse_pcs_base + TSE_PCS_IF_MODE_REG);
+
+               val = readw(tse_pcs_base + TSE_PCS_IF_MODE_REG);
+               val &= ~TSE_PCS_SGMII_SPEED_MASK;
+
+               switch (speed) {
+               case 1000:
+                       val |= TSE_PCS_SGMII_SPEED_1000;
+                       break;
+               case 100:
+                       val |= TSE_PCS_SGMII_SPEED_100;
+                       break;
+               case 10:
+                       val |= TSE_PCS_SGMII_SPEED_10;
+                       break;
+               default:
+                       return;
+               }
+               writew(val, tse_pcs_base + TSE_PCS_IF_MODE_REG);
+
+               tse_pcs_reset(tse_pcs_base, pcs);
+
+               setup_timer(&pcs->aneg_link_timer,
+                           aneg_link_timer_callback, (unsigned long)pcs);
+               mod_timer(&pcs->aneg_link_timer, jiffies +
+                         msecs_to_jiffies(AUTONEGO_LINK_TIMER));
+       }
+}
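When autonegotiation is disabled, `tse_pcs_fix_mac_speed()` above programs the two-bit SGMII speed field in `TSE_PCS_IF_MODE_REG` directly: clear `TSE_PCS_SGMII_SPEED_MASK` (bits 3:2), then OR in the encoding for 10/100/1000, returning early on any other speed. That read-modify-write can be checked in isolation; the helper name below is a stand-in for illustration only:

```c
#include <assert.h>
#include <stdint.h>

#define SGMII_SPEED_MASK  0x000Cu    /* GENMASK(3, 2) */
#define SGMII_SPEED_1000  (1u << 3)  /* TSE_PCS_SGMII_SPEED_1000 */
#define SGMII_SPEED_100   (1u << 2)  /* TSE_PCS_SGMII_SPEED_100 */
#define SGMII_SPEED_10    0x0u       /* TSE_PCS_SGMII_SPEED_10 */

/* Mirrors the switch in tse_pcs_fix_mac_speed(): clear the two-bit
 * speed field in the IF_MODE image, then set the encoding for the
 * new speed. Unsupported speeds leave the register image untouched,
 * matching the driver's early return. */
static uint16_t set_if_mode_speed(uint16_t if_mode, unsigned int speed)
{
	uint16_t val = if_mode & ~SGMII_SPEED_MASK;

	switch (speed) {
	case 1000: return val | SGMII_SPEED_1000;
	case 100:  return val | SGMII_SPEED_100;
	case 10:   return val | SGMII_SPEED_10;
	default:   return if_mode;   /* driver returns without writing */
	}
}
```

Note that other bits in `IF_MODE` (such as `TSE_PCS_USE_SGMII_ENA`) survive the update because only the masked field is cleared before the OR.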
diff --git a/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h b/drivers/net/ethernet/stmicro/stmmac/altr_tse_pcs.h
new file mode 100644
index 0000000..2f58824
--- /dev/null
@@ -0,0 +1,36 @@
+/* Copyright Altera Corporation (C) 2016. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2,
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Author: Tien Hock Loh <thloh@altera.com>
+ */
+
+#ifndef __TSE_PCS_H__
+#define __TSE_PCS_H__
+
+#include <linux/phy.h>
+#include <linux/timer.h>
+
+struct tse_pcs {
+       struct device *dev;
+       void __iomem *tse_pcs_base;
+       void __iomem *sgmii_adapter_base;
+       struct timer_list aneg_link_timer;
+       int autoneg;
+};
+
+int tse_pcs_init(void __iomem *base, struct tse_pcs *pcs);
+void tse_pcs_fix_mac_speed(struct tse_pcs *pcs, struct phy_device *phy_dev,
+                          unsigned int speed);
+
+#endif /* __TSE_PCS_H__ */
index f13499f..edd20c3 100644
 #include "stmmac.h"
 #include "stmmac_platform.h"
 
+#include "altr_tse_pcs.h"
+
+#define SGMII_ADAPTER_CTRL_REG                          0x00
+#define SGMII_ADAPTER_DISABLE                           0x0001
+
 #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII 0x0
 #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII 0x1
 #define SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RMII 0x2
@@ -52,35 +57,46 @@ struct socfpga_dwmac {
        struct reset_control *stmmac_rst;
        void __iomem *splitter_base;
        bool f2h_ptp_ref_clk;
+       struct tse_pcs pcs;
 };
 
 static void socfpga_dwmac_fix_mac_speed(void *priv, unsigned int speed)
 {
        struct socfpga_dwmac *dwmac = (struct socfpga_dwmac *)priv;
        void __iomem *splitter_base = dwmac->splitter_base;
+       void __iomem *tse_pcs_base = dwmac->pcs.tse_pcs_base;
+       void __iomem *sgmii_adapter_base = dwmac->pcs.sgmii_adapter_base;
+       struct device *dev = dwmac->dev;
+       struct net_device *ndev = dev_get_drvdata(dev);
+       struct phy_device *phy_dev = ndev->phydev;
        u32 val;
 
-       if (!splitter_base)
-               return;
-
-       val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
-       val &= ~EMAC_SPLITTER_CTRL_SPEED_MASK;
-
-       switch (speed) {
-       case 1000:
-               val |= EMAC_SPLITTER_CTRL_SPEED_1000;
-               break;
-       case 100:
-               val |= EMAC_SPLITTER_CTRL_SPEED_100;
-               break;
-       case 10:
-               val |= EMAC_SPLITTER_CTRL_SPEED_10;
-               break;
-       default:
-               return;
+       if (tse_pcs_base && sgmii_adapter_base)
+               writew(SGMII_ADAPTER_DISABLE,
+                      sgmii_adapter_base + SGMII_ADAPTER_CTRL_REG);
+
+       if (splitter_base) {
+               val = readl(splitter_base + EMAC_SPLITTER_CTRL_REG);
+               val &= ~EMAC_SPLITTER_CTRL_SPEED_MASK;
+
+               switch (speed) {
+               case 1000:
+                       val |= EMAC_SPLITTER_CTRL_SPEED_1000;
+                       break;
+               case 100:
+                       val |= EMAC_SPLITTER_CTRL_SPEED_100;
+                       break;
+               case 10:
+                       val |= EMAC_SPLITTER_CTRL_SPEED_10;
+                       break;
+               default:
+                       return;
+               }
+               writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
        }
 
-       writel(val, splitter_base + EMAC_SPLITTER_CTRL_REG);
+       if (tse_pcs_base && sgmii_adapter_base)
+               tse_pcs_fix_mac_speed(&dwmac->pcs, phy_dev, speed);
 }
 
 static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *dev)
@@ -88,9 +104,12 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
        struct device_node *np = dev->of_node;
        struct regmap *sys_mgr_base_addr;
        u32 reg_offset, reg_shift;
-       int ret;
-       struct device_node *np_splitter;
+       int ret, index;
+       struct device_node *np_splitter = NULL;
+       struct device_node *np_sgmii_adapter = NULL;
        struct resource res_splitter;
+       struct resource res_tse_pcs;
+       struct resource res_sgmii_adapter;
 
        dwmac->interface = of_get_phy_mode(np);
 
@@ -128,6 +147,66 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
                }
        }
 
+       np_sgmii_adapter = of_parse_phandle(np,
+                                           "altr,gmii-to-sgmii-converter", 0);
+       if (np_sgmii_adapter) {
+               index = of_property_match_string(np_sgmii_adapter, "reg-names",
+                                                "hps_emac_interface_splitter_avalon_slave");
+
+               if (index >= 0) {
+                       if (of_address_to_resource(np_sgmii_adapter, index,
+                                                  &res_splitter)) {
+                               dev_err(dev,
+                                       "%s: ERROR: missing emac splitter address\n",
+                                       __func__);
+                               return -EINVAL;
+                       }
+
+                       dwmac->splitter_base =
+                           devm_ioremap_resource(dev, &res_splitter);
+
+                       if (IS_ERR(dwmac->splitter_base))
+                               return PTR_ERR(dwmac->splitter_base);
+               }
+
+               index = of_property_match_string(np_sgmii_adapter, "reg-names",
+                                                "gmii_to_sgmii_adapter_avalon_slave");
+
+               if (index >= 0) {
+                       if (of_address_to_resource(np_sgmii_adapter, index,
+                                                  &res_sgmii_adapter)) {
+                               dev_err(dev,
+                                       "%s: ERROR: failed mapping adapter\n",
+                                       __func__);
+                               return -EINVAL;
+                       }
+
+                       dwmac->pcs.sgmii_adapter_base =
+                           devm_ioremap_resource(dev, &res_sgmii_adapter);
+
+                       if (IS_ERR(dwmac->pcs.sgmii_adapter_base))
+                               return PTR_ERR(dwmac->pcs.sgmii_adapter_base);
+               }
+
+               index = of_property_match_string(np_sgmii_adapter, "reg-names",
+                                                "eth_tse_control_port");
+
+               if (index >= 0) {
+                       if (of_address_to_resource(np_sgmii_adapter, index,
+                                                  &res_tse_pcs)) {
+                               dev_err(dev,
+                                       "%s: ERROR: failed mapping tse control port\n",
+                                       __func__);
+                               return -EINVAL;
+                       }
+
+                       dwmac->pcs.tse_pcs_base =
+                           devm_ioremap_resource(dev, &res_tse_pcs);
+
+                       if (IS_ERR(dwmac->pcs.tse_pcs_base))
+                               return PTR_ERR(dwmac->pcs.tse_pcs_base);
+               }
+       }
        dwmac->reg_offset = reg_offset;
        dwmac->reg_shift = reg_shift;
        dwmac->sys_mgr_base_addr = sys_mgr_base_addr;
@@ -151,6 +230,7 @@ static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
                break;
        case PHY_INTERFACE_MODE_MII:
        case PHY_INTERFACE_MODE_GMII:
+       case PHY_INTERFACE_MODE_SGMII:
                val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
                break;
        default:
@@ -191,6 +271,12 @@ static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
         */
        if (dwmac->stmmac_rst)
                reset_control_deassert(dwmac->stmmac_rst);
+       if (phymode == PHY_INTERFACE_MODE_SGMII) {
+               if (tse_pcs_init(dwmac->pcs.tse_pcs_base, &dwmac->pcs) != 0) {
+                       dev_err(dwmac->dev, "Unable to initialize TSE PCS\n");
+                       return -EINVAL;
+               }
+       }
 
        return 0;
 }
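The `socfpga_dwmac_set_phy_mode()` hunk above adds `PHY_INTERFACE_MODE_SGMII` to the cases that program `SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII`: from the system manager's point of view the MAC still speaks GMII/MII, and the FPGA-side GMII-to-SGMII converter handles the rest. A minimal sketch of that mode-to-PHYSEL mapping, using mock enum names rather than the kernel's `phy_interface_t`:

```c
#include <assert.h>
#include <stdint.h>

/* PHYSEL values match the dwmac-socfpga defines; the enum is a mock
 * stand-in for the kernel's phy_interface_t. SGMII maps to the
 * GMII/MII setting because the FPGA converter sits in between. */
enum mock_phy_mode { MODE_RGMII, MODE_MII, MODE_GMII, MODE_SGMII, MODE_OTHER };

#define PHYSEL_GMII_MII 0x0
#define PHYSEL_RGMII    0x1

static int physel_for_mode(enum mock_phy_mode m, uint32_t *val)
{
	switch (m) {
	case MODE_RGMII:
		*val = PHYSEL_RGMII;
		return 0;
	case MODE_MII:
	case MODE_GMII:
	case MODE_SGMII:        /* new in this patch */
		*val = PHYSEL_GMII_MII;
		return 0;
	default:
		return -1;      /* unsupported interface mode */
	}
}
```

This is also why `tse_pcs_init()` runs only for the SGMII case after the reset line is deasserted: the PCS and SGMII adapter exist only in that converter configuration.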
@@ -225,6 +311,7 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
        plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
 
        ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+
        if (!ret) {
                struct net_device *ndev = platform_get_drvdata(pdev);
                struct stmmac_priv *stpriv = netdev_priv(ndev);
index c34111b..9f159a7 100644
@@ -2877,7 +2877,7 @@ static int dwceqos_probe(struct platform_device *pdev)
                ret = of_phy_register_fixed_link(lp->pdev->dev.of_node);
                if (ret < 0) {
                        dev_err(&pdev->dev, "invalid fixed-link");
-                       goto err_out_unregister_netdev;
+                       goto err_out_unregister_clk_notifier;
                }
 
                lp->phy_node = of_node_get(lp->pdev->dev.of_node);
@@ -2910,7 +2910,8 @@ static int dwceqos_probe(struct platform_device *pdev)
                     (unsigned long)ndev);
        tasklet_disable(&lp->tx_bdreclaim_tasklet);
 
-       lp->txtimeout_handler_wq = create_singlethread_workqueue(DRIVER_NAME);
+       lp->txtimeout_handler_wq = alloc_workqueue(DRIVER_NAME,
+                                                  WQ_MEM_RECLAIM, 0);
        INIT_WORK(&lp->txtimeout_reinit, dwceqos_reinit_for_txtimeout);
 
        platform_set_drvdata(pdev, ndev);
index e7f0b7d..9904d74 100644
@@ -48,8 +48,7 @@ config TI_DAVINCI_CPDMA
          will be called davinci_cpdma.  This is recommended.
 
 config TI_CPSW_PHY_SEL
-       bool "TI CPSW Switch Phy sel Support"
-       depends on TI_CPSW
+       bool
        ---help---
          This driver supports configuring of the phy mode connected to
          the CPSW.
index 7eef45e..d300d53 100644
@@ -205,7 +205,6 @@ struct cpmac_priv {
        dma_addr_t dma_ring;
        void __iomem *regs;
        struct mii_bus *mii_bus;
-       struct phy_device *phy;
        char phy_name[MII_BUS_ID_SIZE + 3];
        int oldlink, oldspeed, oldduplex;
        u32 msg_enable;
@@ -830,37 +829,12 @@ static void cpmac_tx_timeout(struct net_device *dev)
 
 static int cpmac_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 {
-       struct cpmac_priv *priv = netdev_priv(dev);
-
        if (!(netif_running(dev)))
                return -EINVAL;
-       if (!priv->phy)
+       if (!dev->phydev)
                return -EINVAL;
 
-       return phy_mii_ioctl(priv->phy, ifr, cmd);
-}
-
-static int cpmac_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct cpmac_priv *priv = netdev_priv(dev);
-
-       if (priv->phy)
-               return phy_ethtool_gset(priv->phy, cmd);
-
-       return -EINVAL;
-}
-
-static int cpmac_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct cpmac_priv *priv = netdev_priv(dev);
-
-       if (!capable(CAP_NET_ADMIN))
-               return -EPERM;
-
-       if (priv->phy)
-               return phy_ethtool_sset(priv->phy, cmd);
-
-       return -EINVAL;
+       return phy_mii_ioctl(dev->phydev, ifr, cmd);
 }
 
 static void cpmac_get_ringparam(struct net_device *dev,
@@ -900,12 +874,12 @@ static void cpmac_get_drvinfo(struct net_device *dev,
 }
 
 static const struct ethtool_ops cpmac_ethtool_ops = {
-       .get_settings = cpmac_get_settings,
-       .set_settings = cpmac_set_settings,
        .get_drvinfo = cpmac_get_drvinfo,
        .get_link = ethtool_op_get_link,
        .get_ringparam = cpmac_get_ringparam,
        .set_ringparam = cpmac_set_ringparam,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 static void cpmac_adjust_link(struct net_device *dev)
@@ -914,16 +888,16 @@ static void cpmac_adjust_link(struct net_device *dev)
        int new_state = 0;
 
        spin_lock(&priv->lock);
-       if (priv->phy->link) {
+       if (dev->phydev->link) {
                netif_tx_start_all_queues(dev);
-               if (priv->phy->duplex != priv->oldduplex) {
+               if (dev->phydev->duplex != priv->oldduplex) {
                        new_state = 1;
-                       priv->oldduplex = priv->phy->duplex;
+                       priv->oldduplex = dev->phydev->duplex;
                }
 
-               if (priv->phy->speed != priv->oldspeed) {
+               if (dev->phydev->speed != priv->oldspeed) {
                        new_state = 1;
-                       priv->oldspeed = priv->phy->speed;
+                       priv->oldspeed = dev->phydev->speed;
                }
 
                if (!priv->oldlink) {
@@ -938,7 +912,7 @@ static void cpmac_adjust_link(struct net_device *dev)
        }
 
        if (new_state && netif_msg_link(priv) && net_ratelimit())
-               phy_print_status(priv->phy);
+               phy_print_status(dev->phydev);
 
        spin_unlock(&priv->lock);
 }
@@ -1016,8 +990,8 @@ static int cpmac_open(struct net_device *dev)
        cpmac_hw_start(dev);
 
        napi_enable(&priv->napi);
-       priv->phy->state = PHY_CHANGELINK;
-       phy_start(priv->phy);
+       dev->phydev->state = PHY_CHANGELINK;
+       phy_start(dev->phydev);
 
        return 0;
 
@@ -1032,8 +1006,10 @@ fail_desc:
                        kfree_skb(priv->rx_head[i].skb);
                }
        }
+       dma_free_coherent(&dev->dev, sizeof(struct cpmac_desc) * size,
+                         priv->desc_ring, priv->dma_ring);
+
 fail_alloc:
-       kfree(priv->desc_ring);
        iounmap(priv->regs);
 
 fail_remap:
@@ -1053,7 +1029,7 @@ static int cpmac_stop(struct net_device *dev)
 
        cancel_work_sync(&priv->reset_work);
        napi_disable(&priv->napi);
-       phy_stop(priv->phy);
+       phy_stop(dev->phydev);
 
        cpmac_hw_stop(dev);
 
@@ -1106,6 +1082,7 @@ static int cpmac_probe(struct platform_device *pdev)
        struct cpmac_priv *priv;
        struct net_device *dev;
        struct plat_cpmac_data *pdata;
+       struct phy_device *phydev = NULL;
 
        pdata = dev_get_platdata(&pdev->dev);
 
@@ -1142,7 +1119,7 @@ static int cpmac_probe(struct platform_device *pdev)
        mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
        if (!mem) {
                rc = -ENODEV;
-               goto out;
+               goto fail;
        }
 
        dev->irq = platform_get_irq_byname(pdev, "irq");
@@ -1162,15 +1139,15 @@ static int cpmac_probe(struct platform_device *pdev)
        snprintf(priv->phy_name, MII_BUS_ID_SIZE, PHY_ID_FMT,
                                                mdio_bus_id, phy_id);
 
-       priv->phy = phy_connect(dev, priv->phy_name, cpmac_adjust_link,
-                               PHY_INTERFACE_MODE_MII);
+       phydev = phy_connect(dev, priv->phy_name, cpmac_adjust_link,
+                            PHY_INTERFACE_MODE_MII);
 
-       if (IS_ERR(priv->phy)) {
+       if (IS_ERR(phydev)) {
                if (netif_msg_drv(priv))
                        dev_err(&pdev->dev, "Could not attach to PHY\n");
 
-               rc = PTR_ERR(priv->phy);
-               goto out;
+               rc = PTR_ERR(phydev);
+               goto fail;
        }
 
        rc = register_netdev(dev);
@@ -1189,7 +1166,6 @@ static int cpmac_probe(struct platform_device *pdev)
 
 fail:
        free_netdev(dev);
-out:
        return rc;
 }
 
index 74e6719..5b01b3f 100644 (file)
@@ -405,7 +405,6 @@ struct tc35815_local {
        spinlock_t rx_lock;
 
        struct mii_bus *mii_bus;
-       struct phy_device *phy_dev;
        int duplex;
        int speed;
        int link;
@@ -539,7 +538,7 @@ static int tc_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 val)
 static void tc_handle_link_change(struct net_device *dev)
 {
        struct tc35815_local *lp = netdev_priv(dev);
-       struct phy_device *phydev = lp->phy_dev;
+       struct phy_device *phydev = dev->phydev;
        unsigned long flags;
        int status_change = 0;
 
@@ -645,7 +644,6 @@ static int tc_mii_probe(struct net_device *dev)
        lp->link = 0;
        lp->speed = 0;
        lp->duplex = -1;
-       lp->phy_dev = phydev;
 
        return 0;
 }
@@ -853,7 +851,7 @@ static void tc35815_remove_one(struct pci_dev *pdev)
        struct net_device *dev = pci_get_drvdata(pdev);
        struct tc35815_local *lp = netdev_priv(dev);
 
-       phy_disconnect(lp->phy_dev);
+       phy_disconnect(dev->phydev);
        mdiobus_unregister(lp->mii_bus);
        mdiobus_free(lp->mii_bus);
        unregister_netdev(dev);
@@ -1143,8 +1141,8 @@ static void tc35815_restart(struct net_device *dev)
        struct tc35815_local *lp = netdev_priv(dev);
        int ret;
 
-       if (lp->phy_dev) {
-               ret = phy_init_hw(lp->phy_dev);
+       if (dev->phydev) {
+               ret = phy_init_hw(dev->phydev);
                if (ret)
                        printk(KERN_ERR "%s: PHY init failed.\n", dev->name);
        }
@@ -1236,7 +1234,7 @@ tc35815_open(struct net_device *dev)
 
        netif_carrier_off(dev);
        /* schedule a link state check */
-       phy_start(lp->phy_dev);
+       phy_start(dev->phydev);
 
        /* We are now ready to accept transmit requests from
         * the queueing layer of the networking.
@@ -1819,8 +1817,8 @@ tc35815_close(struct net_device *dev)
 
        netif_stop_queue(dev);
        napi_disable(&lp->napi);
-       if (lp->phy_dev)
-               phy_stop(lp->phy_dev);
+       if (dev->phydev)
+               phy_stop(dev->phydev);
        cancel_work_sync(&lp->restart_work);
 
        /* Flush the Tx and disable Rx here. */
@@ -1946,24 +1944,6 @@ static void tc35815_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *
        strlcpy(info->bus_info, pci_name(lp->pci_dev), sizeof(info->bus_info));
 }
 
-static int tc35815_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct tc35815_local *lp = netdev_priv(dev);
-
-       if (!lp->phy_dev)
-               return -ENODEV;
-       return phy_ethtool_gset(lp->phy_dev, cmd);
-}
-
-static int tc35815_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
-       struct tc35815_local *lp = netdev_priv(dev);
-
-       if (!lp->phy_dev)
-               return -ENODEV;
-       return phy_ethtool_sset(lp->phy_dev, cmd);
-}
-
 static u32 tc35815_get_msglevel(struct net_device *dev)
 {
        struct tc35815_local *lp = netdev_priv(dev);
@@ -2013,25 +1993,23 @@ static void tc35815_get_strings(struct net_device *dev, u32 stringset, u8 *data)
 
 static const struct ethtool_ops tc35815_ethtool_ops = {
        .get_drvinfo            = tc35815_get_drvinfo,
-       .get_settings           = tc35815_get_settings,
-       .set_settings           = tc35815_set_settings,
        .get_link               = ethtool_op_get_link,
        .get_msglevel           = tc35815_get_msglevel,
        .set_msglevel           = tc35815_set_msglevel,
        .get_strings            = tc35815_get_strings,
        .get_sset_count         = tc35815_get_sset_count,
        .get_ethtool_stats      = tc35815_get_ethtool_stats,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 static int tc35815_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
 {
-       struct tc35815_local *lp = netdev_priv(dev);
-
        if (!netif_running(dev))
                return -EINVAL;
-       if (!lp->phy_dev)
+       if (!dev->phydev)
                return -ENODEV;
-       return phy_mii_ioctl(lp->phy_dev, rq, cmd);
+       return phy_mii_ioctl(dev->phydev, rq, cmd);
 }
 
 static void tc35815_chip_reset(struct net_device *dev)
@@ -2116,7 +2094,7 @@ static void tc35815_chip_init(struct net_device *dev)
        if (lp->chiptype == TC35815_TX4939)
                txctl &= ~Tx_EnLCarr;
        /* WORKAROUND: ignore LostCrS in full duplex operation */
-       if (!lp->phy_dev || !lp->link || lp->duplex == DUPLEX_FULL)
+       if (!dev->phydev || !lp->link || lp->duplex == DUPLEX_FULL)
                txctl &= ~Tx_EnLCarr;
        tc_writel(txctl, &tr->Tx_Ctl);
 }
@@ -2132,8 +2110,8 @@ static int tc35815_suspend(struct pci_dev *pdev, pm_message_t state)
        if (!netif_running(dev))
                return 0;
        netif_device_detach(dev);
-       if (lp->phy_dev)
-               phy_stop(lp->phy_dev);
+       if (dev->phydev)
+               phy_stop(dev->phydev);
        spin_lock_irqsave(&lp->lock, flags);
        tc35815_chip_reset(dev);
        spin_unlock_irqrestore(&lp->lock, flags);
@@ -2144,7 +2122,6 @@ static int tc35815_suspend(struct pci_dev *pdev, pm_message_t state)
 static int tc35815_resume(struct pci_dev *pdev)
 {
        struct net_device *dev = pci_get_drvdata(pdev);
-       struct tc35815_local *lp = netdev_priv(dev);
 
        pci_restore_state(pdev);
        if (!netif_running(dev))
@@ -2152,8 +2129,8 @@ static int tc35815_resume(struct pci_dev *pdev)
        pci_set_power_state(pdev, PCI_D0);
        tc35815_restart(dev);
        netif_carrier_off(dev);
-       if (lp->phy_dev)
-               phy_start(lp->phy_dev);
+       if (dev->phydev)
+               phy_start(dev->phydev);
        netif_device_attach(dev);
        return 0;
 }
index 902457e..7d06e3e 100644 (file)
@@ -332,7 +332,6 @@ struct temac_local {
        struct device *dev;
 
        /* Connection to PHY device */
-       struct phy_device *phy_dev;     /* Pointer to PHY device */
        struct device_node *phy_node;
 
        /* MDIO bus data */
index 7397087..a9bd665 100644 (file)
@@ -590,7 +590,7 @@ static void temac_device_reset(struct net_device *ndev)
 static void temac_adjust_link(struct net_device *ndev)
 {
        struct temac_local *lp = netdev_priv(ndev);
-       struct phy_device *phy = lp->phy_dev;
+       struct phy_device *phy = ndev->phydev;
        u32 mii_speed;
        int link_state;
 
@@ -843,19 +843,20 @@ static irqreturn_t ll_temac_rx_irq(int irq, void *_ndev)
 static int temac_open(struct net_device *ndev)
 {
        struct temac_local *lp = netdev_priv(ndev);
+       struct phy_device *phydev = NULL;
        int rc;
 
        dev_dbg(&ndev->dev, "temac_open()\n");
 
        if (lp->phy_node) {
-               lp->phy_dev = of_phy_connect(lp->ndev, lp->phy_node,
-                                            temac_adjust_link, 0, 0);
-               if (!lp->phy_dev) {
+               phydev = of_phy_connect(lp->ndev, lp->phy_node,
+                                       temac_adjust_link, 0, 0);
+               if (!phydev) {
                        dev_err(lp->dev, "of_phy_connect() failed\n");
                        return -ENODEV;
                }
 
-               phy_start(lp->phy_dev);
+               phy_start(phydev);
        }
 
        temac_device_reset(ndev);
@@ -872,9 +873,8 @@ static int temac_open(struct net_device *ndev)
  err_rx_irq:
        free_irq(lp->tx_irq, ndev);
  err_tx_irq:
-       if (lp->phy_dev)
-               phy_disconnect(lp->phy_dev);
-       lp->phy_dev = NULL;
+       if (phydev)
+               phy_disconnect(phydev);
        dev_err(lp->dev, "request_irq() failed\n");
        return rc;
 }
@@ -882,15 +882,15 @@ static int temac_open(struct net_device *ndev)
 static int temac_stop(struct net_device *ndev)
 {
        struct temac_local *lp = netdev_priv(ndev);
+       struct phy_device *phydev = ndev->phydev;
 
        dev_dbg(&ndev->dev, "temac_close()\n");
 
        free_irq(lp->tx_irq, ndev);
        free_irq(lp->rx_irq, ndev);
 
-       if (lp->phy_dev)
-               phy_disconnect(lp->phy_dev);
-       lp->phy_dev = NULL;
+       if (phydev)
+               phy_disconnect(phydev);
 
        temac_dma_bd_release(ndev);
 
@@ -916,15 +916,13 @@ temac_poll_controller(struct net_device *ndev)
 
 static int temac_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
 {
-       struct temac_local *lp = netdev_priv(ndev);
-
        if (!netif_running(ndev))
                return -EINVAL;
 
-       if (!lp->phy_dev)
+       if (!ndev->phydev)
                return -EINVAL;
 
-       return phy_mii_ioctl(lp->phy_dev, rq, cmd);
+       return phy_mii_ioctl(ndev->phydev, rq, cmd);
 }
 
 static const struct net_device_ops temac_netdev_ops = {
@@ -969,30 +967,17 @@ static const struct attribute_group temac_attr_group = {
 };
 
 /* ethtool support */
-static int temac_get_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
-{
-       struct temac_local *lp = netdev_priv(ndev);
-       return phy_ethtool_gset(lp->phy_dev, cmd);
-}
-
-static int temac_set_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
-{
-       struct temac_local *lp = netdev_priv(ndev);
-       return phy_ethtool_sset(lp->phy_dev, cmd);
-}
-
 static int temac_nway_reset(struct net_device *ndev)
 {
-       struct temac_local *lp = netdev_priv(ndev);
-       return phy_start_aneg(lp->phy_dev);
+       return phy_start_aneg(ndev->phydev);
 }
 
 static const struct ethtool_ops temac_ethtool_ops = {
-       .get_settings = temac_get_settings,
-       .set_settings = temac_set_settings,
        .nway_reset = temac_nway_reset,
        .get_link = ethtool_op_get_link,
        .get_ts_info = ethtool_op_get_ts_info,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 static int temac_of_probe(struct platform_device *op)
index 9ead4e2..af27f7d 100644 (file)
@@ -382,7 +382,6 @@ struct axidma_bd {
  * struct axienet_local - axienet private per device data
  * @ndev:      Pointer for net_device to which it will be attached.
  * @dev:       Pointer to device structure
- * @phy_dev:   Pointer to PHY device structure attached to the axienet_local
  * @phy_node:  Pointer to device node structure
  * @mii_bus:   Pointer to MII bus structure
  * @regs:      Base address for the axienet_local device address space
@@ -420,7 +419,6 @@ struct axienet_local {
        struct device *dev;
 
        /* Connection to PHY device */
-       struct phy_device *phy_dev;     /* Pointer to PHY device */
        struct device_node *phy_node;
 
        /* MDIO bus data */
index 8c7f5be..36ee7ab 100644 (file)
@@ -525,7 +525,7 @@ static void axienet_adjust_link(struct net_device *ndev)
        u32 link_state;
        u32 setspeed = 1;
        struct axienet_local *lp = netdev_priv(ndev);
-       struct phy_device *phy = lp->phy_dev;
+       struct phy_device *phy = ndev->phydev;
 
        link_state = phy->speed | (phy->duplex << 1) | phy->link;
        if (lp->last_link != link_state) {
@@ -911,6 +911,7 @@ static int axienet_open(struct net_device *ndev)
 {
        int ret, mdio_mcreg;
        struct axienet_local *lp = netdev_priv(ndev);
+       struct phy_device *phydev = NULL;
 
        dev_dbg(&ndev->dev, "axienet_open()\n");
 
@@ -934,19 +935,19 @@ static int axienet_open(struct net_device *ndev)
 
        if (lp->phy_node) {
                if (lp->phy_type == XAE_PHY_TYPE_GMII) {
-                       lp->phy_dev = of_phy_connect(lp->ndev, lp->phy_node,
-                                            axienet_adjust_link, 0,
-                                            PHY_INTERFACE_MODE_GMII);
+                       phydev = of_phy_connect(lp->ndev, lp->phy_node,
+                                               axienet_adjust_link, 0,
+                                               PHY_INTERFACE_MODE_GMII);
                } else if (lp->phy_type == XAE_PHY_TYPE_RGMII_2_0) {
-                       lp->phy_dev = of_phy_connect(lp->ndev, lp->phy_node,
-                                            axienet_adjust_link, 0,
-                                            PHY_INTERFACE_MODE_RGMII_ID);
+                       phydev = of_phy_connect(lp->ndev, lp->phy_node,
+                                               axienet_adjust_link, 0,
+                                               PHY_INTERFACE_MODE_RGMII_ID);
                }
 
-               if (!lp->phy_dev)
+               if (!phydev)
                        dev_err(lp->dev, "of_phy_connect() failed\n");
                else
-                       phy_start(lp->phy_dev);
+                       phy_start(phydev);
        }
 
        /* Enable tasklets for Axi DMA error handling */
@@ -967,9 +968,8 @@ static int axienet_open(struct net_device *ndev)
 err_rx_irq:
        free_irq(lp->tx_irq, ndev);
 err_tx_irq:
-       if (lp->phy_dev)
-               phy_disconnect(lp->phy_dev);
-       lp->phy_dev = NULL;
+       if (phydev)
+               phy_disconnect(phydev);
        tasklet_kill(&lp->dma_err_tasklet);
        dev_err(lp->dev, "request_irq() failed\n");
        return ret;
@@ -1006,9 +1006,8 @@ static int axienet_stop(struct net_device *ndev)
        free_irq(lp->tx_irq, ndev);
        free_irq(lp->rx_irq, ndev);
 
-       if (lp->phy_dev)
-               phy_disconnect(lp->phy_dev);
-       lp->phy_dev = NULL;
+       if (ndev->phydev)
+               phy_disconnect(ndev->phydev);
 
        axienet_dma_bd_release(ndev);
        return 0;
@@ -1077,51 +1076,6 @@ static const struct net_device_ops axienet_netdev_ops = {
 #endif
 };
 
-/**
- * axienet_ethtools_get_settings - Get Axi Ethernet settings related to PHY.
- * @ndev:      Pointer to net_device structure
- * @ecmd:      Pointer to ethtool_cmd structure
- *
- * This implements ethtool command for getting PHY settings. If PHY could
- * not be found, the function returns -ENODEV. This function calls the
- * relevant PHY ethtool API to get the PHY settings.
- * Issue "ethtool ethX" under linux prompt to execute this function.
- *
- * Return: 0 on success, -ENODEV if PHY doesn't exist
- */
-static int axienet_ethtools_get_settings(struct net_device *ndev,
-                                        struct ethtool_cmd *ecmd)
-{
-       struct axienet_local *lp = netdev_priv(ndev);
-       struct phy_device *phydev = lp->phy_dev;
-       if (!phydev)
-               return -ENODEV;
-       return phy_ethtool_gset(phydev, ecmd);
-}
-
-/**
- * axienet_ethtools_set_settings - Set PHY settings as passed in the argument.
- * @ndev:      Pointer to net_device structure
- * @ecmd:      Pointer to ethtool_cmd structure
- *
- * This implements ethtool command for setting various PHY settings. If PHY
- * could not be found, the function returns -ENODEV. This function calls the
- * relevant PHY ethtool API to set the PHY.
- * Issue e.g. "ethtool -s ethX speed 1000" under linux prompt to execute this
- * function.
- *
- * Return: 0 on success, -ENODEV if PHY doesn't exist
- */
-static int axienet_ethtools_set_settings(struct net_device *ndev,
-                                        struct ethtool_cmd *ecmd)
-{
-       struct axienet_local *lp = netdev_priv(ndev);
-       struct phy_device *phydev = lp->phy_dev;
-       if (!phydev)
-               return -ENODEV;
-       return phy_ethtool_sset(phydev, ecmd);
-}
-
 /**
  * axienet_ethtools_get_drvinfo - Get various Axi Ethernet driver information.
  * @ndev:      Pointer to net_device structure
@@ -1344,8 +1298,6 @@ static int axienet_ethtools_set_coalesce(struct net_device *ndev,
 }
 
 static struct ethtool_ops axienet_ethtool_ops = {
-       .get_settings   = axienet_ethtools_get_settings,
-       .set_settings   = axienet_ethtools_set_settings,
        .get_drvinfo    = axienet_ethtools_get_drvinfo,
        .get_regs_len   = axienet_ethtools_get_regs_len,
        .get_regs       = axienet_ethtools_get_regs,
@@ -1354,6 +1306,8 @@ static struct ethtool_ops axienet_ethtool_ops = {
        .set_pauseparam = axienet_ethtools_set_pauseparam,
        .get_coalesce   = axienet_ethtools_get_coalesce,
        .set_coalesce   = axienet_ethtools_set_coalesce,
+       .get_link_ksettings = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings = phy_ethtool_set_link_ksettings,
 };
 
 /**
@@ -1587,9 +1541,9 @@ static int axienet_probe(struct platform_device *pdev)
 
        /* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
        np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
-       if (IS_ERR(np)) {
+       if (!np) {
                dev_err(&pdev->dev, "could not find DMA node\n");
-               ret = PTR_ERR(np);
+               ret = -ENODEV;
                goto free_netdev;
        }
        ret = of_address_to_resource(np, 0, &dmares);
index 5de892f..3c20e87 100644 (file)
@@ -1134,9 +1134,9 @@ static struct device_type geneve_type = {
        .name = "geneve",
 };
 
-/* Calls the ndo_add_udp_enc_port of the caller in order to
+/* Calls the ndo_udp_tunnel_add of the caller in order to
  * supply the listening GENEVE udp ports. Callers are expected
- * to implement the ndo_add_udp_enc_port.
+ * to implement the ndo_udp_tunnel_add.
  */
 static void geneve_push_rx_ports(struct net_device *dev)
 {
index 6909c32..20e0917 100644 (file)
@@ -1128,6 +1128,39 @@ static inline void netvsc_receive_inband(struct hv_device *hdev,
        }
 }
 
+static void netvsc_process_raw_pkt(struct hv_device *device,
+                                  struct vmbus_channel *channel,
+                                  struct netvsc_device *net_device,
+                                  struct net_device *ndev,
+                                  u64 request_id,
+                                  struct vmpacket_descriptor *desc)
+{
+       struct nvsp_message *nvmsg;
+
+       nvmsg = (struct nvsp_message *)((unsigned long)
+               desc + (desc->offset8 << 3));
+
+       switch (desc->type) {
+       case VM_PKT_COMP:
+               netvsc_send_completion(net_device, channel, device, desc);
+               break;
+
+       case VM_PKT_DATA_USING_XFER_PAGES:
+               netvsc_receive(net_device, channel, device, desc);
+               break;
+
+       case VM_PKT_DATA_INBAND:
+               netvsc_receive_inband(device, net_device, nvmsg);
+               break;
+
+       default:
+               netdev_err(ndev, "unhandled packet type %d, tid %llx\n",
+                          desc->type, request_id);
+               break;
+       }
+}
+
+
 void netvsc_channel_cb(void *context)
 {
        int ret;
@@ -1140,7 +1173,7 @@ void netvsc_channel_cb(void *context)
        unsigned char *buffer;
        int bufferlen = NETVSC_PACKET_SIZE;
        struct net_device *ndev;
-       struct nvsp_message *nvmsg;
+       bool need_to_commit = false;
 
        if (channel->primary_channel != NULL)
                device = channel->primary_channel->device_obj;
@@ -1154,39 +1187,36 @@ void netvsc_channel_cb(void *context)
        buffer = get_per_channel_state(channel);
 
        do {
+               desc = get_next_pkt_raw(channel);
+               if (desc != NULL) {
+                       netvsc_process_raw_pkt(device,
+                                              channel,
+                                              net_device,
+                                              ndev,
+                                              desc->trans_id,
+                                              desc);
+
+                       put_pkt_raw(channel, desc);
+                       need_to_commit = true;
+                       continue;
+               }
+               if (need_to_commit) {
+                       need_to_commit = false;
+                       commit_rd_index(channel);
+               }
+
                ret = vmbus_recvpacket_raw(channel, buffer, bufferlen,
                                           &bytes_recvd, &request_id);
                if (ret == 0) {
                        if (bytes_recvd > 0) {
                                desc = (struct vmpacket_descriptor *)buffer;
-                               nvmsg = (struct nvsp_message *)((unsigned long)
-                                        desc + (desc->offset8 << 3));
-                               switch (desc->type) {
-                               case VM_PKT_COMP:
-                                       netvsc_send_completion(net_device,
-                                                               channel,
-                                                               device, desc);
-                                       break;
-
-                               case VM_PKT_DATA_USING_XFER_PAGES:
-                                       netvsc_receive(net_device, channel,
-                                                      device, desc);
-                                       break;
-
-                               case VM_PKT_DATA_INBAND:
-                                       netvsc_receive_inband(device,
-                                                             net_device,
-                                                             nvmsg);
-                                       break;
-
-                               default:
-                                       netdev_err(ndev,
-                                                  "unhandled packet type %d, "
-                                                  "tid %llx len %d\n",
-                                                  desc->type, request_id,
-                                                  bytes_recvd);
-                                       break;
-                               }
+                               netvsc_process_raw_pkt(device,
+                                                      channel,
+                                                      net_device,
+                                                      ndev,
+                                                      request_id,
+                                                      desc);
+
 
                        } else {
                                /*
index 52c9051..1056ed1 100644 (file)
@@ -366,11 +366,7 @@ static int atusb_channel(struct ieee802154_hw *hw, u8 page, u8 channel)
        struct atusb *atusb = hw->priv;
        int ret;
 
-       /* This implicitly sets the CCA (Clear Channel Assessment) mode to 0,
-        * "Mode 3a, Carrier sense OR energy above threshold".
-        * We should probably make this configurable. @@@
-        */
-       ret = atusb_write_reg(atusb, RG_PHY_CC_CCA, channel);
+       ret = atusb_write_subreg(atusb, SR_CHANNEL, channel);
        if (ret < 0)
                return ret;
        msleep(1);      /* @@@ ugly synchronization */
index 860d4ae..0becf0a 100644 (file)
@@ -112,6 +112,12 @@ static void fakelb_hw_stop(struct ieee802154_hw *hw)
        write_unlock_bh(&fakelb_ifup_phys_lock);
 }
 
+static int
+fakelb_set_promiscuous_mode(struct ieee802154_hw *hw, const bool on)
+{
+       return 0;
+}
+
 static const struct ieee802154_ops fakelb_ops = {
        .owner = THIS_MODULE,
        .xmit_async = fakelb_hw_xmit,
@@ -119,6 +125,7 @@ static const struct ieee802154_ops fakelb_ops = {
        .set_channel = fakelb_hw_channel,
        .start = fakelb_hw_start,
        .stop = fakelb_hw_stop,
+       .set_promiscuous_mode = fakelb_set_promiscuous_mode,
 };
 
 /* Number of dummy devices to be set up by this module. */
@@ -174,6 +181,7 @@ static int fakelb_add_one(struct device *dev)
        hw->phy->current_channel = 13;
        phy->channel = hw->phy->current_channel;
 
+       hw->flags = IEEE802154_HW_PROMISCUOUS;
        hw->parent = dev;
 
        err = ieee802154_register_hw(hw);
index f446db8..7b131f8 100644 (file)
@@ -1054,6 +1054,8 @@ static irqreturn_t mrf24j40_isr(int irq, void *data)
        disable_irq_nosync(irq);
 
        devrec->irq_buf[0] = MRF24J40_READSHORT(REG_INTSTAT);
+       devrec->irq_buf[1] = 0;
+
        /* Read the interrupt status */
        ret = spi_async(devrec->spi, &devrec->irq_msg);
        if (ret) {
index 8bcd78f..0cbb935 100644 (file)
@@ -18,6 +18,7 @@
 #include <linux/rtnetlink.h>
 #include <net/genetlink.h>
 #include <net/sock.h>
+#include <net/gro_cells.h>
 
 #include <uapi/linux/if_macsec.h>
 
@@ -268,6 +269,7 @@ struct macsec_dev {
        struct net_device *real_dev;
        struct pcpu_secy_stats __percpu *stats;
        struct list_head secys;
+       struct gro_cells gro_cells;
 };
 
 /**
@@ -879,7 +881,7 @@ static void macsec_decrypt_done(struct crypto_async_request *base, int err)
        macsec_reset_skb(skb, macsec->secy.netdev);
 
        len = skb->len;
-       ret = netif_rx(skb);
+       ret = gro_cells_receive(&macsec->gro_cells, skb);
        if (ret == NET_RX_SUCCESS)
                count_rx(dev, len);
        else
@@ -1052,6 +1054,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb)
        struct pcpu_rx_sc_stats *rxsc_stats;
        struct pcpu_secy_stats *secy_stats;
        bool pulled_sci;
+       int ret;
 
        if (skb_headroom(skb) < ETH_HLEN)
                goto drop_direct;
@@ -1193,12 +1196,17 @@ deliver:
 
        if (rx_sa)
                macsec_rxsa_put(rx_sa);
-       count_rx(dev, skb->len);
+
+       ret = gro_cells_receive(&macsec->gro_cells, skb);
+       if (ret == NET_RX_SUCCESS)
+               count_rx(dev, skb->len);
+       else
+               macsec->secy.netdev->stats.rx_dropped++;
 
        rcu_read_unlock();
 
-       *pskb = skb;
-       return RX_HANDLER_ANOTHER;
+       *pskb = NULL;
+       return RX_HANDLER_CONSUMED;
 
 drop:
        macsec_rxsa_put(rx_sa);
@@ -1218,7 +1226,6 @@ nosci:
 
        list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
                struct sk_buff *nskb;
-               int ret;
 
                secy_stats = this_cpu_ptr(macsec->stats);
 
@@ -2675,11 +2682,18 @@ static int macsec_dev_init(struct net_device *dev)
 {
        struct macsec_dev *macsec = macsec_priv(dev);
        struct net_device *real_dev = macsec->real_dev;
+       int err;
 
        dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
        if (!dev->tstats)
                return -ENOMEM;
 
+       err = gro_cells_init(&macsec->gro_cells, dev);
+       if (err) {
+               free_percpu(dev->tstats);
+               return err;
+       }
+
        dev->features = real_dev->features & MACSEC_FEATURES;
        dev->features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
 
@@ -2698,6 +2712,9 @@ static int macsec_dev_init(struct net_device *dev)
 
 static void macsec_dev_uninit(struct net_device *dev)
 {
+       struct macsec_dev *macsec = macsec_priv(dev);
+
+       gro_cells_destroy(&macsec->gro_cells);
        free_percpu(dev->tstats);
 }
 
@@ -2707,8 +2724,9 @@ static netdev_features_t macsec_fix_features(struct net_device *dev,
        struct macsec_dev *macsec = macsec_priv(dev);
        struct net_device *real_dev = macsec->real_dev;
 
-       features &= real_dev->features & MACSEC_FEATURES;
-       features |= NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE;
+       features &= (real_dev->features & MACSEC_FEATURES) |
+                   NETIF_F_GSO_SOFTWARE | NETIF_F_SOFT_FEATURES;
+       features |= NETIF_F_LLTX;
 
        return features;
 }
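
The fix_features change above lets software-emulated features (GSO, soft checksum) stay requestable even when the real device lacks them, while LLTX is forced on unconditionally. A minimal userspace sketch of that mask logic, using hypothetical flag values standing in for the kernel's NETIF_F_* bits:

```c
#include <stdint.h>

/* Hypothetical stand-ins for NETIF_F_* bits (values illustrative only). */
#define F_SG        (1u << 0)   /* scatter/gather, hardware-dependent  */
#define F_HW_CSUM   (1u << 1)   /* hardware checksum                   */
#define F_GSO_SW    (1u << 2)   /* software GSO, always emulatable     */
#define F_SOFT      (1u << 3)   /* other software-only features        */
#define F_LLTX      (1u << 4)   /* lockless TX                         */
#define MACSEC_FEAT (F_SG | F_HW_CSUM)

/* Mirrors the corrected macsec_fix_features(): software features pass
 * through regardless of the real device, and LLTX is forced on. */
static uint32_t fix_features(uint32_t requested, uint32_t real_dev)
{
	requested &= (real_dev & MACSEC_FEAT) | F_GSO_SW | F_SOFT;
	requested |= F_LLTX;
	return requested;
}
```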
index 95a1332..a38c0da 100644 (file)
@@ -21,6 +21,7 @@
 #include <net/rtnetlink.h>
 #include <net/sock.h>
 #include <linux/virtio_net.h>
+#include <linux/skb_array.h>
 
 /*
  * A macvtap queue is the central object of this driver, it connects
@@ -43,6 +44,7 @@ struct macvtap_queue {
        u16 queue_index;
        bool enabled;
        struct list_head next;
+       struct skb_array skb_array;
 };
 
 #define MACVTAP_FEATURES (IFF_VNET_HDR | IFF_MULTI_QUEUE)
@@ -273,6 +275,7 @@ static void macvtap_put_queue(struct macvtap_queue *q)
        rtnl_unlock();
 
        synchronize_rcu();
+       skb_array_cleanup(&q->skb_array);
        sock_put(&q->sk);
 }
 
@@ -299,6 +302,9 @@ static struct macvtap_queue *macvtap_get_queue(struct net_device *dev,
        if (!numvtaps)
                goto out;
 
+       if (numvtaps == 1)
+               goto single;
+
        /* Check if we can use flow to select a queue */
        rxq = skb_get_hash(skb);
        if (rxq) {
@@ -316,6 +322,7 @@ static struct macvtap_queue *macvtap_get_queue(struct net_device *dev,
                goto out;
        }
 
+single:
        tap = rcu_dereference(vlan->taps[0]);
 out:
        return tap;
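
The `numvtaps == 1` shortcut added above skips the flow-hash computation entirely when only one queue exists. A simplified sketch of the selection policy (the real code also consults `skb_get_rx_queue()` and per-queue mappings; this only shows the hash/modulo fallback):

```c
#include <stdint.h>

/* Pick a macvtap queue index from a flow hash. With one queue the hash
 * is never computed (the "single:" fast path); with no usable hash the
 * first queue is used. */
static unsigned pick_queue(uint32_t flow_hash, unsigned numqueues)
{
	if (numqueues == 1)
		return 0;
	if (flow_hash)
		return flow_hash % numqueues;
	return 0;
}
```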
@@ -362,7 +369,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
        if (!q)
                return RX_HANDLER_PASS;
 
-       if (skb_queue_len(&q->sk.sk_receive_queue) >= dev->tx_queue_len)
+       if (__skb_array_full(&q->skb_array))
                goto drop;
 
        skb_push(skb, ETH_HLEN);
@@ -380,7 +387,8 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
                        goto drop;
 
                if (!segs) {
-                       skb_queue_tail(&q->sk.sk_receive_queue, skb);
+                       if (skb_array_produce(&q->skb_array, skb))
+                               goto drop;
                        goto wake_up;
                }
 
@@ -389,7 +397,11 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
                        struct sk_buff *nskb = segs->next;
 
                        segs->next = NULL;
-                       skb_queue_tail(&q->sk.sk_receive_queue, segs);
+                       if (skb_array_produce(&q->skb_array, segs)) {
+                               kfree_skb(segs);
+                               kfree_skb_list(nskb);
+                               break;
+                       }
                        segs = nskb;
                }
        } else {
@@ -402,7 +414,8 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
                    !(features & NETIF_F_CSUM_MASK) &&
                    skb_checksum_help(skb))
                        goto drop;
-               skb_queue_tail(&q->sk.sk_receive_queue, skb);
+               if (skb_array_produce(&q->skb_array, skb))
+                       goto drop;
        }
 
 wake_up:
@@ -519,7 +532,11 @@ static void macvtap_sock_write_space(struct sock *sk)
 
 static void macvtap_sock_destruct(struct sock *sk)
 {
-       skb_queue_purge(&sk->sk_receive_queue);
+       struct macvtap_queue *q = container_of(sk, struct macvtap_queue, sk);
+       struct sk_buff *skb;
+
+       while ((skb = skb_array_consume(&q->skb_array)) != NULL)
+               kfree_skb(skb);
 }
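
skb_array (built on ptr_ring) gives macvtap a bounded FIFO: producing into a full ring fails and the packet is dropped, instead of letting sk_receive_queue grow without limit. A minimal single-threaded pointer ring with the same produce/consume contract (a sketch only; the kernel's ptr_ring adds locking and cache-friendly index handling):

```c
#include <stddef.h>

#define RING_SIZE 4u		/* must be a power of two for the modulo trick */

struct ring {
	void *slot[RING_SIZE];
	unsigned head, tail;	/* consume at head, produce at tail */
};

/* Returns 0 on success, -1 if the ring is full (caller drops). */
static int ring_produce(struct ring *r, void *p)
{
	if (r->tail - r->head == RING_SIZE)
		return -1;
	r->slot[r->tail++ % RING_SIZE] = p;
	return 0;
}

/* Returns NULL when empty, mirroring skb_array_consume()'s contract. */
static void *ring_consume(struct ring *r)
{
	if (r->tail == r->head)
		return NULL;
	return r->slot[r->head++ % RING_SIZE];
}
```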
 
 static int macvtap_open(struct inode *inode, struct file *file)
@@ -532,13 +549,13 @@ static int macvtap_open(struct inode *inode, struct file *file)
        rtnl_lock();
        dev = dev_get_by_macvtap_minor(iminor(inode));
        if (!dev)
-               goto out;
+               goto err;
 
        err = -ENOMEM;
        q = (struct macvtap_queue *)sk_alloc(net, AF_UNSPEC, GFP_KERNEL,
                                             &macvtap_proto, 0);
        if (!q)
-               goto out;
+               goto err;
 
        RCU_INIT_POINTER(q->sock.wq, &q->wq);
        init_waitqueue_head(&q->wq.wait);
@@ -562,11 +579,24 @@ static int macvtap_open(struct inode *inode, struct file *file)
        if ((dev->features & NETIF_F_HIGHDMA) && (dev->features & NETIF_F_SG))
                sock_set_flag(&q->sk, SOCK_ZEROCOPY);
 
+       err = -ENOMEM;
+       if (skb_array_init(&q->skb_array, dev->tx_queue_len, GFP_KERNEL))
+               goto err_array;
+
        err = macvtap_set_queue(dev, file, q);
        if (err)
-               sock_put(&q->sk);
+               goto err_queue;
 
-out:
+       dev_put(dev);
+
+       rtnl_unlock();
+       return err;
+
+err_queue:
+       skb_array_cleanup(&q->skb_array);
+err_array:
+       sock_put(&q->sk);
+err:
        if (dev)
                dev_put(dev);
 
@@ -592,7 +622,7 @@ static unsigned int macvtap_poll(struct file *file, poll_table * wait)
        mask = 0;
        poll_wait(file, &q->wq.wait, wait);
 
-       if (!skb_queue_empty(&q->sk.sk_receive_queue))
+       if (!skb_array_empty(&q->skb_array))
                mask |= POLLIN | POLLRDNORM;
 
        if (sock_writeable(&q->sk) ||
@@ -852,7 +882,7 @@ static ssize_t macvtap_do_read(struct macvtap_queue *q,
                                        TASK_INTERRUPTIBLE);
 
                /* Read frames from the queue */
-               skb = skb_dequeue(&q->sk.sk_receive_queue);
+               skb = skb_array_consume(&q->skb_array);
                if (skb)
                        break;
                if (noblock) {
@@ -1176,10 +1206,18 @@ static int macvtap_recvmsg(struct socket *sock, struct msghdr *m,
        return ret;
 }
 
+static int macvtap_peek_len(struct socket *sock)
+{
+       struct macvtap_queue *q = container_of(sock, struct macvtap_queue,
+                                              sock);
+       return skb_array_peek_len(&q->skb_array);
+}
+
 /* Ops structure to mimic raw sockets with tun */
 static const struct proto_ops macvtap_socket_ops = {
        .sendmsg = macvtap_sendmsg,
        .recvmsg = macvtap_recvmsg,
+       .peek_len = macvtap_peek_len,
 };
 
 /* Get an underlying socket object from tun file.  Returns error unless file is
@@ -1198,6 +1236,28 @@ struct socket *macvtap_get_socket(struct file *file)
 }
 EXPORT_SYMBOL_GPL(macvtap_get_socket);
 
+static int macvtap_queue_resize(struct macvlan_dev *vlan)
+{
+       struct net_device *dev = vlan->dev;
+       struct macvtap_queue *q;
+       struct skb_array **arrays;
+       int n = vlan->numqueues;
+       int ret, i = 0;
+
+       arrays = kmalloc(sizeof(*arrays) * n, GFP_KERNEL);
+       if (!arrays)
+               return -ENOMEM;
+
+       list_for_each_entry(q, &vlan->queue_list, next)
+               arrays[i++] = &q->skb_array;
+
+       ret = skb_array_resize_multiple(arrays, n,
+                                       dev->tx_queue_len, GFP_KERNEL);
+
+       kfree(arrays);
+       return ret;
+}
+
 static int macvtap_device_event(struct notifier_block *unused,
                                unsigned long event, void *ptr)
 {
@@ -1245,6 +1305,10 @@ static int macvtap_device_event(struct notifier_block *unused,
                device_destroy(&macvtap_class, devt);
                macvtap_free_minor(vlan);
                break;
+       case NETDEV_CHANGE_TX_QUEUE_LEN:
+               if (macvtap_queue_resize(vlan))
+                       return NOTIFY_BAD;
+               break;
        }
 
        return NOTIFY_DONE;
index f968294..1d7b208 100644 (file)
@@ -294,6 +294,13 @@ config INTEL_XWAY_PHY
          PEF 7061, PEF 7071 and PEF 7072 or integrated into the Intel
          SoCs xRX200, xRX300, xRX330, xRX350 and xRX550.
 
+config MDIO_HISI_FEMAC
+       tristate "Hisilicon FEMAC MDIO bus controller"
+       depends on HAS_IOMEM && OF_MDIO
+       help
+         This module provides a driver for the MDIO busses found in
+         Hisilicon SoCs that have a Fast Ethernet MAC.
+
 endif # PHYLIB
 
 config MICREL_KS8995MA
index 7158274..19e38a9 100644 (file)
@@ -47,3 +47,4 @@ obj-$(CONFIG_MDIO_BCM_UNIMAC) += mdio-bcm-unimac.o
 obj-$(CONFIG_MICROCHIP_PHY)    += microchip.o
 obj-$(CONFIG_MDIO_BCM_IPROC)   += mdio-bcm-iproc.o
 obj-$(CONFIG_INTEL_XWAY_PHY)   += intel-xway.o
+obj-$(CONFIG_MDIO_HISI_FEMAC)  += mdio-hisi-femac.o
index ec2c1ee..c2dcf02 100644 (file)
 #define MII_88E1510_GEN_CTRL_REG_1_MODE_SGMII  0x1     /* SGMII to copper */
 #define MII_88E1510_GEN_CTRL_REG_1_RESET       0x8000  /* Soft reset */
 
+#define LPA_FIBER_1000HALF     0x40
+#define LPA_FIBER_1000FULL     0x20
+
+#define LPA_PAUSE_FIBER        0x180
+#define LPA_PAUSE_ASYM_FIBER   0x100
+
+#define ADVERTISE_FIBER_1000HALF       0x40
+#define ADVERTISE_FIBER_1000FULL       0x20
+
+#define ADVERTISE_PAUSE_FIBER          0x180
+#define ADVERTISE_PAUSE_ASYM_FIBER     0x100
+
+#define REGISTER_LINK_STATUS   0x400
+#define NB_FIBER_STATS 1
+
 MODULE_DESCRIPTION("Marvell PHY driver");
 MODULE_AUTHOR("Andy Fleming");
 MODULE_LICENSE("GPL");
@@ -150,8 +165,9 @@ struct marvell_hw_stat {
 };
 
 static struct marvell_hw_stat marvell_hw_stats[] = {
-       { "phy_receive_errors", 0, 21, 16},
+       { "phy_receive_errors_copper", 0, 21, 16},
        { "phy_idle_errors", 0, 10, 8 },
+       { "phy_receive_errors_fiber", 1, 21, 16},
 };
 
 struct marvell_priv {
@@ -477,15 +493,122 @@ static int m88e1318_config_aneg(struct phy_device *phydev)
        return m88e1121_config_aneg(phydev);
 }
 
+/**
+ * ethtool_adv_to_fiber_adv_t
+ * @ethadv: the ethtool advertisement settings
+ *
+ * A small helper function that translates ethtool advertisement
+ * settings to phy autonegotiation advertisements for the
+ * MII_ADVERTISE register for the fiber link.
+ */
+static inline u32 ethtool_adv_to_fiber_adv_t(u32 ethadv)
+{
+       u32 result = 0;
+
+       if (ethadv & ADVERTISED_1000baseT_Half)
+               result |= ADVERTISE_FIBER_1000HALF;
+       if (ethadv & ADVERTISED_1000baseT_Full)
+               result |= ADVERTISE_FIBER_1000FULL;
+
+       if ((ethadv & ADVERTISE_PAUSE_ASYM) && (ethadv & ADVERTISE_PAUSE_CAP))
+               result |= LPA_PAUSE_ASYM_FIBER;
+       else if (ethadv & ADVERTISE_PAUSE_CAP)
+               result |= (ADVERTISE_PAUSE_FIBER
+                          & (~ADVERTISE_PAUSE_ASYM_FIBER));
+
+       return result;
+}
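
With the fiber constants above, the translation helper can be exercised in userspace. The sketch below mirrors ethtool_adv_to_fiber_adv_t() as written, including its use of the MII-style ADVERTISE_PAUSE_CAP/ASYM constants on the input value; the ethtool/MII bit values are taken from linux/ethtool.h and linux/mii.h and are assumptions of this sketch, not part of the patch:

```c
#include <stdint.h>

/* Fiber-side advertisement bits, as defined in the patch. */
#define ADVERTISE_FIBER_1000HALF   0x40
#define ADVERTISE_FIBER_1000FULL   0x20
#define ADVERTISE_PAUSE_FIBER      0x180
#define ADVERTISE_PAUSE_ASYM_FIBER 0x100
/* ethtool/MII bits (values as in linux/ethtool.h and linux/mii.h). */
#define ADVERTISED_1000baseT_Half  (1u << 4)
#define ADVERTISED_1000baseT_Full  (1u << 5)
#define ADVERTISE_PAUSE_CAP        0x0400
#define ADVERTISE_PAUSE_ASYM       0x0800

/* Userspace restatement of ethtool_adv_to_fiber_adv_t() above. */
static uint32_t adv_to_fiber(uint32_t ethadv)
{
	uint32_t result = 0;

	if (ethadv & ADVERTISED_1000baseT_Half)
		result |= ADVERTISE_FIBER_1000HALF;
	if (ethadv & ADVERTISED_1000baseT_Full)
		result |= ADVERTISE_FIBER_1000FULL;

	if ((ethadv & ADVERTISE_PAUSE_ASYM) && (ethadv & ADVERTISE_PAUSE_CAP))
		result |= ADVERTISE_PAUSE_ASYM_FIBER;
	else if (ethadv & ADVERTISE_PAUSE_CAP)
		result |= ADVERTISE_PAUSE_FIBER & ~ADVERTISE_PAUSE_ASYM_FIBER;

	return result;
}
```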
+
+/**
+ * marvell_config_aneg_fiber - restart auto-negotiation or write BMCR
+ * @phydev: target phy_device struct
+ *
+ * Description: If auto-negotiation is enabled, we configure the
+ *   advertising, and then restart auto-negotiation.  If it is not
+ *   enabled, then we write the BMCR. Adapted for the fiber link on
+ *   some Marvell devices.
+ */
+static int marvell_config_aneg_fiber(struct phy_device *phydev)
+{
+       int changed = 0;
+       int err;
+       int adv, oldadv;
+       u32 advertise;
+
+       if (phydev->autoneg != AUTONEG_ENABLE)
+               return genphy_setup_forced(phydev);
+
+       /* Only allow advertising what this PHY supports */
+       phydev->advertising &= phydev->supported;
+       advertise = phydev->advertising;
+
+       /* Setup fiber advertisement */
+       adv = phy_read(phydev, MII_ADVERTISE);
+       if (adv < 0)
+               return adv;
+
+       oldadv = adv;
+       adv &= ~(ADVERTISE_FIBER_1000HALF | ADVERTISE_FIBER_1000FULL
+               | LPA_PAUSE_FIBER);
+       adv |= ethtool_adv_to_fiber_adv_t(advertise);
+
+       if (adv != oldadv) {
+               err = phy_write(phydev, MII_ADVERTISE, adv);
+               if (err < 0)
+                       return err;
+
+               changed = 1;
+       }
+
+       if (changed == 0) {
+               /* Advertisement hasn't changed, but maybe aneg was never on to
+                * begin with?  Or maybe phy was isolated?
+                */
+               int ctl = phy_read(phydev, MII_BMCR);
+
+               if (ctl < 0)
+                       return ctl;
+
+               if (!(ctl & BMCR_ANENABLE) || (ctl & BMCR_ISOLATE))
+                       changed = 1; /* do restart aneg */
+       }
+
+       /* Only restart aneg if we are advertising something different
+        * than we were before.
+        */
+       if (changed > 0)
+               changed = genphy_restart_aneg(phydev);
+
+       return changed;
+}
+
 static int m88e1510_config_aneg(struct phy_device *phydev)
 {
        int err;
 
+       err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+       if (err < 0)
+               goto error;
+
+       /* Configure the copper link first */
        err = m88e1318_config_aneg(phydev);
        if (err < 0)
-               return err;
+               goto error;
 
-       return 0;
+       /* Then the fiber link */
+       err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
+       if (err < 0)
+               goto error;
+
+       err = marvell_config_aneg_fiber(phydev);
+       if (err < 0)
+               goto error;
+
+       return phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+
+error:
+       phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+       return err;
 }
 
 static int marvell_config_init(struct phy_device *phydev)
@@ -890,26 +1013,79 @@ static int m88e1145_config_init(struct phy_device *phydev)
        return 0;
 }
 
-/* marvell_read_status
+/**
+ * fiber_lpa_to_ethtool_lpa_t
+ * @lpa: value of the MII_LPA register for fiber link
+ *
+ * A small helper function that translates MII_LPA
+ * bits to ethtool LP advertisement settings.
+ */
+static u32 fiber_lpa_to_ethtool_lpa_t(u32 lpa)
+{
+       u32 result = 0;
+
+       if (lpa & LPA_FIBER_1000HALF)
+               result |= ADVERTISED_1000baseT_Half;
+       if (lpa & LPA_FIBER_1000FULL)
+               result |= ADVERTISED_1000baseT_Full;
+
+       return result;
+}
+
+/**
+ * marvell_update_link - update link status in real time in @phydev
+ * @phydev: target phy_device struct
+ * @fiber: non-zero to check the fiber link status instead of copper
+ *
+ * Description: Update the value in phydev->link to reflect the
+ *   current link value.
+ */
+static int marvell_update_link(struct phy_device *phydev, int fiber)
+{
+       int status;
+
+       /* Use the generic register for the copper link, or the
+        * fiber-specific register otherwise.
+        */
+       if (fiber) {
+               status = phy_read(phydev, MII_M1011_PHY_STATUS);
+               if (status < 0)
+                       return status;
+
+               if ((status & REGISTER_LINK_STATUS) == 0)
+                       phydev->link = 0;
+               else
+                       phydev->link = 1;
+       } else {
+               return genphy_update_link(phydev);
+       }
+
+       return 0;
+}
+
+/* marvell_read_status_page
  *
- * Generic status code does not detect Fiber correctly!
  * Description:
  *   Check the link, then figure out the current state
  *   by comparing what we advertise with what the link partner
  *   advertises.  Start by checking the gigabit possibilities,
  *   then move on to 10/100.
  */
-static int marvell_read_status(struct phy_device *phydev)
+static int marvell_read_status_page(struct phy_device *phydev, int page)
 {
        int adv;
        int err;
        int lpa;
        int lpagb;
        int status = 0;
+       int fiber;
 
-       /* Update the link, but return if there
+       /* Detect and update the link, but return if there
         * was an error */
-       err = genphy_update_link(phydev);
+       if (page == MII_M1111_FIBER)
+               fiber = 1;
+       else
+               fiber = 0;
+
+       err = marvell_update_link(phydev, fiber);
        if (err)
                return err;
 
@@ -930,9 +1106,6 @@ static int marvell_read_status(struct phy_device *phydev)
                if (adv < 0)
                        return adv;
 
-               phydev->lp_advertising = mii_stat1000_to_ethtool_lpa_t(lpagb) |
-                                        mii_lpa_to_ethtool_lpa_t(lpa);
-
                lpa &= adv;
 
                if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
@@ -957,9 +1130,30 @@ static int marvell_read_status(struct phy_device *phydev)
                        break;
                }
 
-               if (phydev->duplex == DUPLEX_FULL) {
-                       phydev->pause = lpa & LPA_PAUSE_CAP ? 1 : 0;
-                       phydev->asym_pause = lpa & LPA_PAUSE_ASYM ? 1 : 0;
+               if (!fiber) {
+                       phydev->lp_advertising = mii_stat1000_to_ethtool_lpa_t(lpagb) |
+                                        mii_lpa_to_ethtool_lpa_t(lpa);
+
+                       if (phydev->duplex == DUPLEX_FULL) {
+                               phydev->pause = lpa & LPA_PAUSE_CAP ? 1 : 0;
+                               phydev->asym_pause = lpa & LPA_PAUSE_ASYM ? 1 : 0;
+                       }
+               } else {
+                       /* The fiber link is only 1000M capable */
+                       phydev->lp_advertising = fiber_lpa_to_ethtool_lpa_t(lpa);
+
+                       if (phydev->duplex == DUPLEX_FULL) {
+                               if (!(lpa & LPA_PAUSE_FIBER)) {
+                                       phydev->pause = 0;
+                                       phydev->asym_pause = 0;
+                               } else if ((lpa & LPA_PAUSE_ASYM_FIBER)) {
+                                       phydev->pause = 1;
+                                       phydev->asym_pause = 1;
+                               } else {
+                                       phydev->pause = 1;
+                                       phydev->asym_pause = 0;
+                               }
+                       }
                }
        } else {
                int bmcr = phy_read(phydev, MII_BMCR);
@@ -986,6 +1180,119 @@ static int marvell_read_status(struct phy_device *phydev)
        return 0;
 }
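
The full-duplex fiber branch above resolves the pause/asym_pause pair from the link partner's fiber LPA bits. A userspace restatement of that three-way decision, using the patch's constants:

```c
/* Fiber link-partner pause bits, as defined in the patch. */
#define LPA_PAUSE_FIBER      0x180
#define LPA_PAUSE_ASYM_FIBER 0x100

struct pause_state { int pause, asym_pause; };

/* Mirrors the full-duplex fiber pause resolution in
 * marvell_read_status_page() above. */
static struct pause_state resolve_fiber_pause(unsigned int lpa)
{
	struct pause_state p = {0, 0};

	if (!(lpa & LPA_PAUSE_FIBER)) {
		/* partner advertises no pause at all */
	} else if (lpa & LPA_PAUSE_ASYM_FIBER) {
		p.pause = 1;
		p.asym_pause = 1;
	} else {
		p.pause = 1;
	}
	return p;
}
```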
 
+/* marvell_read_status
+ *
+ * Some Marvell PHYs have two modes: fiber and copper.
+ * Both need their status checked.
+ * Description:
+ *   First, check the fiber link and status.
+ *   If the fiber link is down, check the copper link and status,
+ *   which is reported by default if both links are down.
+ */
+static int marvell_read_status(struct phy_device *phydev)
+{
+       int err;
+
+       /* Check the fiber mode first */
+       if (phydev->supported & SUPPORTED_FIBRE) {
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
+               if (err < 0)
+                       goto error;
+
+               err = marvell_read_status_page(phydev, MII_M1111_FIBER);
+               if (err < 0)
+                       goto error;
+
+               /* If the fiber link is up, it is the selected and used link.
+                * In this case, we need to stay in the fiber page. Be careful
+                * not to restore the copper page in other functions, as that
+                * could break the behaviour for some fiber PHYs like the
+                * 88E1512.
+                */
+               if (phydev->link)
+                       return 0;
+
+               /* If fiber link is down, check and save copper mode state */
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+               if (err < 0)
+                       goto error;
+       }
+
+       return marvell_read_status_page(phydev, MII_M1111_COPPER);
+
+error:
+       phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+       return err;
+}
+
+/* marvell_suspend
+ *
+ * Some Marvell PHYs have two modes: fiber and copper.
+ * Both need to be suspended.
+ */
+static int marvell_suspend(struct phy_device *phydev)
+{
+       int err;
+
+       /* Suspend the fiber mode first */
+       if (phydev->supported & SUPPORTED_FIBRE) {
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
+               if (err < 0)
+                       goto error;
+
+               /* With the page set, use the generic suspend */
+               err = genphy_suspend(phydev);
+               if (err < 0)
+                       goto error;
+
+               /* Then, the copper link */
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+               if (err < 0)
+                       goto error;
+       }
+
+       /* With the page set, use the generic suspend */
+       return genphy_suspend(phydev);
+
+error:
+       phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+       return err;
+}
+
+/* marvell_resume
+ *
+ * Some Marvell PHYs have two modes: fiber and copper.
+ * Both need to be resumed.
+ */
+static int marvell_resume(struct phy_device *phydev)
+{
+       int err;
+
+       /* Resume the fiber mode first */
+       if (phydev->supported & SUPPORTED_FIBRE) {
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_FIBER);
+               if (err < 0)
+                       goto error;
+
+               /* With the page set, use the generic resume */
+               err = genphy_resume(phydev);
+               if (err < 0)
+                       goto error;
+
+               /* Then, the copper link */
+               err = phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+               if (err < 0)
+                       goto error;
+       }
+
+       /* With the page set, use the generic resume */
+       return genphy_resume(phydev);
+
+error:
+       phy_write(phydev, MII_MARVELL_PHY_PAGE, MII_M1111_COPPER);
+       return err;
+}
+
 static int marvell_aneg_done(struct phy_device *phydev)
 {
        int retval = phy_read(phydev, MII_M1011_PHY_STATUS);
@@ -1107,7 +1414,10 @@ static int m88e1318_set_wol(struct phy_device *phydev, struct ethtool_wolinfo *w
 
 static int marvell_get_sset_count(struct phy_device *phydev)
 {
-       return ARRAY_SIZE(marvell_hw_stats);
+       if (phydev->supported & SUPPORTED_FIBRE)
+               return ARRAY_SIZE(marvell_hw_stats);
+       else
+               return ARRAY_SIZE(marvell_hw_stats) - NB_FIBER_STATS;
 }
 
 static void marvell_get_strings(struct phy_device *phydev, u8 *data)
@@ -1361,7 +1671,7 @@ static struct phy_driver marvell_drivers[] = {
                .phy_id = MARVELL_PHY_ID_88E1510,
                .phy_id_mask = MARVELL_PHY_ID_MASK,
                .name = "Marvell 88E1510",
-               .features = PHY_GBIT_FEATURES,
+               .features = PHY_GBIT_FEATURES | SUPPORTED_FIBRE,
                .flags = PHY_HAS_INTERRUPT,
                .probe = marvell_probe,
                .config_init = &m88e1510_config_init,
@@ -1370,8 +1680,8 @@ static struct phy_driver marvell_drivers[] = {
                .ack_interrupt = &marvell_ack_interrupt,
                .config_intr = &marvell_config_intr,
                .did_interrupt = &m88e1121_did_interrupt,
-               .resume = &genphy_resume,
-               .suspend = &genphy_suspend,
+               .resume = &marvell_resume,
+               .suspend = &marvell_suspend,
                .get_sset_count = marvell_get_sset_count,
                .get_strings = marvell_get_strings,
                .get_stats = marvell_get_stats,
diff --git a/drivers/net/phy/mdio-hisi-femac.c b/drivers/net/phy/mdio-hisi-femac.c
new file mode 100644 (file)
index 0000000..b03fedd
--- /dev/null
@@ -0,0 +1,166 @@
+/*
+ * Hisilicon Fast Ethernet MDIO Bus Driver
+ *
+ * Copyright (c) 2016 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clk.h>
+#include <linux/iopoll.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_mdio.h>
+#include <linux/platform_device.h>
+
+#define MDIO_RWCTRL            0x00
+#define MDIO_RO_DATA           0x04
+#define MDIO_WRITE             BIT(13)
+#define MDIO_RW_FINISH         BIT(15)
+#define BIT_PHY_ADDR_OFFSET    8
+#define BIT_WR_DATA_OFFSET     16
+
+struct hisi_femac_mdio_data {
+       struct clk *clk;
+       void __iomem *membase;
+};
+
+static int hisi_femac_mdio_wait_ready(struct hisi_femac_mdio_data *data)
+{
+       u32 val;
+
+       return readl_poll_timeout(data->membase + MDIO_RWCTRL,
+                                 val, val & MDIO_RW_FINISH, 20, 10000);
+}
+
+static int hisi_femac_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+{
+       struct hisi_femac_mdio_data *data = bus->priv;
+       int ret;
+
+       ret = hisi_femac_mdio_wait_ready(data);
+       if (ret)
+               return ret;
+
+       writel((mii_id << BIT_PHY_ADDR_OFFSET) | regnum,
+              data->membase + MDIO_RWCTRL);
+
+       ret = hisi_femac_mdio_wait_ready(data);
+       if (ret)
+               return ret;
+
+       return readl(data->membase + MDIO_RO_DATA) & 0xFFFF;
+}
+
+static int hisi_femac_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
+                                u16 value)
+{
+       struct hisi_femac_mdio_data *data = bus->priv;
+       int ret;
+
+       ret = hisi_femac_mdio_wait_ready(data);
+       if (ret)
+               return ret;
+
+       writel(MDIO_WRITE | (value << BIT_WR_DATA_OFFSET) |
+              (mii_id << BIT_PHY_ADDR_OFFSET) | regnum,
+              data->membase + MDIO_RWCTRL);
+
+       return hisi_femac_mdio_wait_ready(data);
+}
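
The write path above packs data, the write flag, the PHY address, and the register number into a single MDIO_RWCTRL control word. A sketch of that packing, using the field offsets from the driver:

```c
#include <stdint.h>

/* Field layout of MDIO_RWCTRL, per the driver above. */
#define MDIO_WRITE          (1u << 13)
#define BIT_PHY_ADDR_OFFSET 8
#define BIT_WR_DATA_OFFSET  16

/* Builds the control word a write transaction programs into MDIO_RWCTRL:
 * data in bits 31:16, the write flag, PHY address starting at bit 8,
 * register number in the low bits. */
static uint32_t mdio_write_cmd(unsigned int phy, unsigned int reg, uint16_t val)
{
	return MDIO_WRITE | ((uint32_t)val << BIT_WR_DATA_OFFSET) |
	       (phy << BIT_PHY_ADDR_OFFSET) | reg;
}
```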
+
+static int hisi_femac_mdio_probe(struct platform_device *pdev)
+{
+       struct device_node *np = pdev->dev.of_node;
+       struct mii_bus *bus;
+       struct hisi_femac_mdio_data *data;
+       struct resource *res;
+       int ret;
+
+       bus = mdiobus_alloc_size(sizeof(*data));
+       if (!bus)
+               return -ENOMEM;
+
+       bus->name = "hisi_femac_mii_bus";
+       bus->read = &hisi_femac_mdio_read;
+       bus->write = &hisi_femac_mdio_write;
+       snprintf(bus->id, MII_BUS_ID_SIZE, "%s", pdev->name);
+       bus->parent = &pdev->dev;
+
+       data = bus->priv;
+       res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+       data->membase = devm_ioremap_resource(&pdev->dev, res);
+       if (IS_ERR(data->membase)) {
+               ret = PTR_ERR(data->membase);
+               goto err_out_free_mdiobus;
+       }
+
+       data->clk = devm_clk_get(&pdev->dev, NULL);
+       if (IS_ERR(data->clk)) {
+               ret = PTR_ERR(data->clk);
+               goto err_out_free_mdiobus;
+       }
+
+       ret = clk_prepare_enable(data->clk);
+       if (ret)
+               goto err_out_free_mdiobus;
+
+       ret = of_mdiobus_register(bus, np);
+       if (ret)
+               goto err_out_disable_clk;
+
+       platform_set_drvdata(pdev, bus);
+
+       return 0;
+
+err_out_disable_clk:
+       clk_disable_unprepare(data->clk);
+err_out_free_mdiobus:
+       mdiobus_free(bus);
+       return ret;
+}
+
+static int hisi_femac_mdio_remove(struct platform_device *pdev)
+{
+       struct mii_bus *bus = platform_get_drvdata(pdev);
+       struct hisi_femac_mdio_data *data = bus->priv;
+
+       mdiobus_unregister(bus);
+       clk_disable_unprepare(data->clk);
+       mdiobus_free(bus);
+
+       return 0;
+}
+
+static const struct of_device_id hisi_femac_mdio_dt_ids[] = {
+       { .compatible = "hisilicon,hisi-femac-mdio" },
+       { }
+};
+MODULE_DEVICE_TABLE(of, hisi_femac_mdio_dt_ids);
+
+static struct platform_driver hisi_femac_mdio_driver = {
+       .probe = hisi_femac_mdio_probe,
+       .remove = hisi_femac_mdio_remove,
+       .driver = {
+               .name = "hisi-femac-mdio",
+               .of_match_table = hisi_femac_mdio_dt_ids,
+       },
+};
+
+module_platform_driver(hisi_femac_mdio_driver);
+
+MODULE_DESCRIPTION("Hisilicon Fast Ethernet MAC MDIO interface driver");
+MODULE_AUTHOR("Dongpo Li <lidongpo@hisilicon.com>");
+MODULE_LICENSE("GPL v2");
index 5a8fefc..059f13b 100644 (file)
@@ -311,6 +311,36 @@ static int kszphy_config_init(struct phy_device *phydev)
        return 0;
 }
 
+static int ksz8041_config_init(struct phy_device *phydev)
+{
+       struct device_node *of_node = phydev->mdio.dev.of_node;
+
+       /* Limit supported and advertised modes in fiber mode */
+       if (of_property_read_bool(of_node, "micrel,fiber-mode")) {
+               phydev->dev_flags |= MICREL_PHY_FXEN;
+               phydev->supported &= SUPPORTED_FIBRE |
+                                    SUPPORTED_100baseT_Full |
+                                    SUPPORTED_100baseT_Half;
+               phydev->advertising &= ADVERTISED_FIBRE |
+                                      ADVERTISED_100baseT_Full |
+                                      ADVERTISED_100baseT_Half;
+               phydev->autoneg = AUTONEG_DISABLE;
+       }
+
+       return kszphy_config_init(phydev);
+}
+
+static int ksz8041_config_aneg(struct phy_device *phydev)
+{
+       /* Skip auto-negotiation in fiber mode */
+       if (phydev->dev_flags & MICREL_PHY_FXEN) {
+               phydev->speed = SPEED_100;
+               return 0;
+       }
+
+       return genphy_config_aneg(phydev);
+}
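
In fiber mode the KSZ8041 is limited to 100 Mbit, so ksz8041_config_init() masks the supported and advertised modes down accordingly. A sketch of that capability masking, with bit values taken from linux/ethtool.h (an assumption of this sketch):

```c
/* ethtool SUPPORTED_* bit values, as in linux/ethtool.h. */
#define SUP_100HALF (1u << 2)
#define SUP_100FULL (1u << 3)
#define SUP_1000FULL (1u << 5)
#define SUP_FIBRE   (1u << 10)

/* Mirrors the fiber-mode limit applied in ksz8041_config_init():
 * only fibre and 100 Mbit modes survive the mask. */
static unsigned int ksz8041_fiber_limit(unsigned int supported)
{
	return supported & (SUP_FIBRE | SUP_100FULL | SUP_100HALF);
}
```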
+
 static int ksz9021_load_values_from_of(struct phy_device *phydev,
                                       const struct device_node *of_node,
                                       u16 reg,
@@ -788,8 +818,8 @@ static struct phy_driver ksphy_driver[] = {
        .flags          = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
        .driver_data    = &ksz8041_type,
        .probe          = kszphy_probe,
-       .config_init    = kszphy_config_init,
-       .config_aneg    = genphy_config_aneg,
+       .config_init    = ksz8041_config_init,
+       .config_aneg    = ksz8041_config_aneg,
        .read_status    = genphy_read_status,
        .ack_interrupt  = kszphy_ack_interrupt,
        .config_intr    = kszphy_config_intr,
index 17953ab..f226db4 100644 (file)
@@ -2600,8 +2600,6 @@ ppp_unregister_channel(struct ppp_channel *chan)
        spin_lock_bh(&pn->all_channels_lock);
        list_del(&pch->list);
        spin_unlock_bh(&pn->all_channels_lock);
-       put_net(pch->chan_net);
-       pch->chan_net = NULL;
 
        pch->file.dead = 1;
        wake_up_interruptible(&pch->file.rwait);
@@ -3135,6 +3133,9 @@ ppp_disconnect_channel(struct channel *pch)
  */
 static void ppp_destroy_channel(struct channel *pch)
 {
+       put_net(pch->chan_net);
+       pch->chan_net = NULL;
+
        atomic_dec(&channel_count);
 
        if (!pch->file.dead) {
index 5eadb7a..9c8b5bc 100644 (file)
@@ -2503,6 +2503,9 @@ static int tun_device_event(struct notifier_block *unused,
        struct net_device *dev = netdev_notifier_info_to_dev(ptr);
        struct tun_struct *tun = netdev_priv(dev);
 
+       if (dev->rtnl_link_ops != &tun_link_ops)
+               return NOTIFY_DONE;
+
        switch (event) {
        case NETDEV_CHANGE_TX_QUEUE_LEN:
                if (tun_queue_resize(tun))
index cf77f2d..163a2c5 100644 (file)
@@ -149,24 +149,6 @@ static const struct net_device_ops ax88172a_netdev_ops = {
        .ndo_set_rx_mode        = asix_set_multicast,
 };
 
-static int ax88172a_get_settings(struct net_device *net,
-                                struct ethtool_cmd *cmd)
-{
-       if (!net->phydev)
-               return -ENODEV;
-
-       return phy_ethtool_gset(net->phydev, cmd);
-}
-
-static int ax88172a_set_settings(struct net_device *net,
-                                struct ethtool_cmd *cmd)
-{
-       if (!net->phydev)
-               return -ENODEV;
-
-       return phy_ethtool_sset(net->phydev, cmd);
-}
-
 static int ax88172a_nway_reset(struct net_device *net)
 {
        if (!net->phydev)
@@ -185,9 +167,9 @@ static const struct ethtool_ops ax88172a_ethtool_ops = {
        .get_eeprom_len         = asix_get_eeprom_len,
        .get_eeprom             = asix_get_eeprom,
        .set_eeprom             = asix_set_eeprom,
-       .get_settings           = ax88172a_get_settings,
-       .set_settings           = ax88172a_set_settings,
        .nway_reset             = ax88172a_nway_reset,
+       .get_link_ksettings     = phy_ethtool_get_link_ksettings,
+       .set_link_ksettings     = phy_ethtool_set_link_ksettings,
 };
 
 static int ax88172a_reset_phy(struct usbnet *dev, int embd_phy)
index b225bc2..f41a8ad 100644 (file)
@@ -26,6 +26,7 @@
 #include <linux/mdio.h>
 #include <linux/usb/cdc.h>
 #include <linux/suspend.h>
+#include <linux/acpi.h>
 
 /* Information for net-next */
 #define NETNEXT_VERSION                "08"
 /* SRAM_IMPEDANCE */
 #define RX_DRIVING_MASK                0x6000
 
+/* MAC PASSTHRU */
+#define AD_MASK                        0xfee0
+#define EFUSE                  0xcfdb
+#define PASS_THRU_MASK         0x1
+
 enum rtl_register_content {
        _1000bps        = 0x10,
        _100bps         = 0x08,
@@ -1040,6 +1046,65 @@ out1:
        return ret;
 }
 
+/* Devices containing the RTL8153-AD can support a persistent,
+ * host-system-provided MAC address.
+ * Examples of this are the Dell TB15 and Dell WD15 docks.
+ */
+static int vendor_mac_passthru_addr_read(struct r8152 *tp, struct sockaddr *sa)
+{
+       acpi_status status;
+       struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+       union acpi_object *obj;
+       int ret = -EINVAL;
+       u32 ocp_data;
+       unsigned char buf[6];
+
+       /* test for -AD variant of RTL8153 */
+       ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_MISC_0);
+       if ((ocp_data & AD_MASK) != 0x1000)
+               return -ENODEV;
+
+       /* test for MAC address pass-through bit */
+       ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, EFUSE);
+       if ((ocp_data & PASS_THRU_MASK) != 1)
+               return -ENODEV;
+
+       /* returns _AUXMAC_#AABBCCDDEEFF# */
+       status = acpi_evaluate_object(NULL, "\\_SB.AMAC", NULL, &buffer);
+       obj = (union acpi_object *)buffer.pointer;
+       if (!ACPI_SUCCESS(status))
+               return -ENODEV;
+       if (obj->type != ACPI_TYPE_BUFFER || obj->string.length != 0x17) {
+               netif_warn(tp, probe, tp->netdev,
+                          "Invalid buffer when reading pass-thru MAC addr: "
+                          "(%d, %d)\n",
+                          obj->type, obj->string.length);
+               goto amacout;
+       }
+       if (strncmp(obj->string.pointer, "_AUXMAC_#", 9) != 0 ||
+           strncmp(obj->string.pointer + 0x15, "#", 1) != 0) {
+               netif_warn(tp, probe, tp->netdev,
+                          "Invalid header when reading pass-thru MAC addr\n");
+               goto amacout;
+       }
+       ret = hex2bin(buf, obj->string.pointer + 9, 6);
+       if (!(ret == 0 && is_valid_ether_addr(buf))) {
+               netif_warn(tp, probe, tp->netdev,
+                          "Invalid MAC when reading pass-thru MAC addr: "
+                          "%d, %pM\n", ret, buf);
+               ret = -EINVAL;
+               goto amacout;
+       }
+       memcpy(sa->sa_data, buf, 6);
+       ether_addr_copy(tp->netdev->dev_addr, sa->sa_data);
+       netif_info(tp, probe, tp->netdev,
+                  "Using pass-thru MAC addr %pM\n", sa->sa_data);
+
+amacout:
+       kfree(obj);
+       return ret;
+}
+
 static int set_ethernet_addr(struct r8152 *tp)
 {
        struct net_device *dev = tp->netdev;
@@ -1048,8 +1113,15 @@ static int set_ethernet_addr(struct r8152 *tp)
 
        if (tp->version == RTL_VER_01)
                ret = pla_ocp_read(tp, PLA_IDR, 8, sa.sa_data);
-       else
-               ret = pla_ocp_read(tp, PLA_BACKUP, 8, sa.sa_data);
+       else {
+               /* If this is not an RTL8153-AD, no eFuse MAC pass-thru is
+                * set, or the system doesn't provide a valid _SB.AMAC,
+                * this is expected to return non-zero.
+                */
+               ret = vendor_mac_passthru_addr_read(tp, &sa);
+               if (ret < 0)
+                       ret = pla_ocp_read(tp, PLA_BACKUP, 8, sa.sa_data);
+       }
 
        if (ret < 0) {
                netif_err(tp, probe, dev, "Get ether addr fail\n");
@@ -2300,10 +2372,6 @@ static u32 __rtl_get_wol(struct r8152 *tp)
        u32 ocp_data;
        u32 wolopts = 0;
 
-       ocp_data = ocp_read_byte(tp, MCU_TYPE_PLA, PLA_CONFIG5);
-       if (!(ocp_data & LAN_WAKE_EN))
-               return 0;
-
        ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CONFIG34);
        if (ocp_data & LINK_ON_WAKE_EN)
                wolopts |= WAKE_PHY;
@@ -2336,15 +2404,13 @@ static void __rtl_set_wol(struct r8152 *tp, u32 wolopts)
        ocp_write_word(tp, MCU_TYPE_PLA, PLA_CONFIG34, ocp_data);
 
        ocp_data = ocp_read_word(tp, MCU_TYPE_PLA, PLA_CONFIG5);
-       ocp_data &= ~(UWF_EN | BWF_EN | MWF_EN | LAN_WAKE_EN);
+       ocp_data &= ~(UWF_EN | BWF_EN | MWF_EN);
        if (wolopts & WAKE_UCAST)
                ocp_data |= UWF_EN;
        if (wolopts & WAKE_BCAST)
                ocp_data |= BWF_EN;
        if (wolopts & WAKE_MCAST)
                ocp_data |= MWF_EN;
-       if (wolopts & WAKE_ANY)
-               ocp_data |= LAN_WAKE_EN;
        ocp_write_word(tp, MCU_TYPE_PLA, PLA_CONFIG5, ocp_data);
 
        ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CRWECR, CRWECR_NORAML);
@@ -2450,27 +2516,6 @@ static void rtl8153_runtime_enable(struct r8152 *tp, bool enable)
        }
 }
 
-static void rtl_phy_reset(struct r8152 *tp)
-{
-       u16 data;
-       int i;
-
-       data = r8152_mdio_read(tp, MII_BMCR);
-
-       /* don't reset again before the previous one complete */
-       if (data & BMCR_RESET)
-               return;
-
-       data |= BMCR_RESET;
-       r8152_mdio_write(tp, MII_BMCR, data);
-
-       for (i = 0; i < 50; i++) {
-               msleep(20);
-               if ((r8152_mdio_read(tp, MII_BMCR) & BMCR_RESET) == 0)
-                       break;
-       }
-}
-
 static void r8153_teredo_off(struct r8152 *tp)
 {
        u32 ocp_data;
@@ -2850,7 +2895,6 @@ static int rtl8152_set_speed(struct r8152 *tp, u8 autoneg, u16 speed, u8 duplex)
        u16 bmcr, anar, gbcr;
        int ret = 0;
 
-       cancel_delayed_work_sync(&tp->schedule);
        anar = r8152_mdio_read(tp, MII_ADVERTISE);
        anar &= ~(ADVERTISE_10HALF | ADVERTISE_10FULL |
                  ADVERTISE_100HALF | ADVERTISE_100FULL);
@@ -3069,9 +3113,6 @@ static void rtl_work_func_t(struct work_struct *work)
            netif_carrier_ok(tp->netdev))
                napi_schedule(&tp->napi);
 
-       if (test_and_clear_bit(PHY_RESET, &tp->flags))
-               rtl_phy_reset(tp);
-
        mutex_unlock(&tp->control);
 
 out1:
@@ -3135,8 +3176,6 @@ static int rtl8152_open(struct net_device *netdev)
        if (res)
                goto out;
 
-       netif_carrier_off(netdev);
-
        res = usb_autopm_get_interface(tp->intf);
        if (res < 0) {
                free_all_mem(tp);
@@ -4388,3 +4427,4 @@ module_usb_driver(rtl8152_driver);
 MODULE_AUTHOR(DRIVER_AUTHOR);
 MODULE_DESCRIPTION(DRIVER_DESC);
 MODULE_LICENSE("GPL");
+MODULE_VERSION(DRIVER_VERSION);
index 524a47a..4f4f71b 100644 (file)
@@ -428,7 +428,11 @@ generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags)
                dev_err(&intf->dev, "rndis get ethaddr, %d\n", retval);
                goto halt_fail_and_release;
        }
-       memcpy(net->dev_addr, bp, ETH_ALEN);
+
+       if (bp[0] & 0x02)
+               eth_hw_addr_random(net);
+       else
+               ether_addr_copy(net->dev_addr, bp);
 
        /* set a nonzero filter to enable data transfers */
        memset(u.set, 0, sizeof *u.set);
index 1dd08d4..1b5f531 100644 (file)
@@ -144,8 +144,10 @@ struct virtnet_info {
        /* Control VQ buffers: protected by the rtnl lock */
        struct virtio_net_ctrl_hdr ctrl_hdr;
        virtio_net_ctrl_ack ctrl_status;
+       struct virtio_net_ctrl_mq ctrl_mq;
        u8 ctrl_promisc;
        u8 ctrl_allmulti;
+       u16 ctrl_vid;
 
        /* Ethtool settings */
        u8 duplex;
@@ -1058,14 +1060,13 @@ static void virtnet_ack_link_announce(struct virtnet_info *vi)
 static int virtnet_set_queues(struct virtnet_info *vi, u16 queue_pairs)
 {
        struct scatterlist sg;
-       struct virtio_net_ctrl_mq s;
        struct net_device *dev = vi->dev;
 
        if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ))
                return 0;
 
-       s.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs);
-       sg_init_one(&sg, &s, sizeof(s));
+       vi->ctrl_mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs);
+       sg_init_one(&sg, &vi->ctrl_mq, sizeof(vi->ctrl_mq));
 
        if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ,
                                  VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg)) {
@@ -1172,7 +1173,8 @@ static int virtnet_vlan_rx_add_vid(struct net_device *dev,
        struct virtnet_info *vi = netdev_priv(dev);
        struct scatterlist sg;
 
-       sg_init_one(&sg, &vid, sizeof(vid));
+       vi->ctrl_vid = vid;
+       sg_init_one(&sg, &vi->ctrl_vid, sizeof(vi->ctrl_vid));
 
        if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN,
                                  VIRTIO_NET_CTRL_VLAN_ADD, &sg))
@@ -1186,7 +1188,8 @@ static int virtnet_vlan_rx_kill_vid(struct net_device *dev,
        struct virtnet_info *vi = netdev_priv(dev);
        struct scatterlist sg;
 
-       sg_init_one(&sg, &vid, sizeof(vid));
+       vi->ctrl_vid = vid;
+       sg_init_one(&sg, &vi->ctrl_vid, sizeof(vi->ctrl_vid));
 
        if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN,
                                  VIRTIO_NET_CTRL_VLAN_DEL, &sg))
index ae7455d..da4e3d6 100644 (file)
@@ -2475,9 +2475,9 @@ static struct device_type vxlan_type = {
        .name = "vxlan",
 };
 
-/* Calls the ndo_add_udp_enc_port of the caller in order to
+/* Calls the ndo_udp_tunnel_add of the caller in order to
  * supply the listening VXLAN udp ports. Callers are expected
- * to implement the ndo_add_udp_enc_port.
+ * to implement the ndo_udp_tunnel_add.
  */
 static void vxlan_push_rx_ports(struct net_device *dev)
 {
index 19174ac..2fc50ec 100644 (file)
@@ -143,7 +143,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
        if (!priv->rx_bd_base) {
                dev_err(priv->dev, "Cannot allocate MURAM memory for RxBDs\n");
                ret = -ENOMEM;
-               goto rxbd_alloc_error;
+               goto free_uccf;
        }
 
        /* Alloc Tx BD */
@@ -154,7 +154,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
        if (!priv->tx_bd_base) {
                dev_err(priv->dev, "Cannot allocate MURAM memory for TxBDs\n");
                ret = -ENOMEM;
-               goto txbd_alloc_error;
+               goto free_rx_bd;
        }
 
        /* Alloc parameter ram for ucc hdlc */
@@ -164,18 +164,18 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
        if (priv->ucc_pram_offset < 0) {
                dev_err(priv->dev, "Can not allocate MURAM for hdlc prameter.\n");
                ret = -ENOMEM;
-               goto pram_alloc_error;
+               goto free_tx_bd;
        }
 
        priv->rx_skbuff = kzalloc(priv->rx_ring_size * sizeof(*priv->rx_skbuff),
                                  GFP_KERNEL);
        if (!priv->rx_skbuff)
-               goto rx_skb_alloc_error;
+               goto free_ucc_pram;
 
        priv->tx_skbuff = kzalloc(priv->tx_ring_size * sizeof(*priv->tx_skbuff),
                                  GFP_KERNEL);
        if (!priv->tx_skbuff)
-               goto tx_skb_alloc_error;
+               goto free_rx_skbuff;
 
        priv->skb_curtx = 0;
        priv->skb_dirtytx = 0;
@@ -200,14 +200,14 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
        if (riptr < 0) {
                dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
                ret = -ENOMEM;
-               goto riptr_alloc_error;
+               goto free_tx_skbuff;
        }
 
        tiptr = qe_muram_alloc(32, 32);
        if (tiptr < 0) {
                dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
                ret = -ENOMEM;
-               goto tiptr_alloc_error;
+               goto free_riptr;
        }
 
        /* Set RIPTR, TIPTR */
@@ -247,7 +247,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
        if (!bd_buffer) {
                dev_err(priv->dev, "Could not allocate buffer descriptors\n");
                ret = -ENOMEM;
-               goto bd_alloc_error;
+               goto free_tiptr;
        }
 
        memset(bd_buffer, 0, (RX_BD_RING_LEN + TX_BD_RING_LEN)
@@ -283,25 +283,25 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
 
        return 0;
 
-bd_alloc_error:
+free_tiptr:
        qe_muram_free(tiptr);
-tiptr_alloc_error:
+free_riptr:
        qe_muram_free(riptr);
-riptr_alloc_error:
+free_tx_skbuff:
        kfree(priv->tx_skbuff);
-tx_skb_alloc_error:
+free_rx_skbuff:
        kfree(priv->rx_skbuff);
-rx_skb_alloc_error:
+free_ucc_pram:
        qe_muram_free(priv->ucc_pram_offset);
-pram_alloc_error:
+free_tx_bd:
        dma_free_coherent(priv->dev,
                          TX_BD_RING_LEN * sizeof(struct qe_bd),
                          priv->tx_bd_base, priv->dma_tx_bd);
-txbd_alloc_error:
+free_rx_bd:
        dma_free_coherent(priv->dev,
                          RX_BD_RING_LEN * sizeof(struct qe_bd),
                          priv->rx_bd_base, priv->dma_rx_bd);
-rxbd_alloc_error:
+free_uccf:
        ucc_fast_free(priv->uccf);
 
        return ret;
@@ -635,9 +635,8 @@ static int uhdlc_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
                        ifr->ifr_settings.size = size; /* data size wanted */
                        return -ENOBUFS;
                }
+               memset(&line, 0, sizeof(line));
                line.clock_type = priv->clocking;
-               line.clock_rate = 0;
-               line.loopback = 0;
 
                if (copy_to_user(ifr->ifr_settings.ifs_ifsu.sync, &line, size))
                        return -EFAULT;
@@ -1068,9 +1067,7 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
 
        uhdlc_priv = kzalloc(sizeof(*uhdlc_priv), GFP_KERNEL);
        if (!uhdlc_priv) {
-               ret = -ENOMEM;
-               dev_err(&pdev->dev, "No mem to alloc hdlc private data\n");
-               goto err_alloc_priv;
+               return -ENOMEM;
        }
 
        dev_set_drvdata(&pdev->dev, uhdlc_priv);
@@ -1088,25 +1085,25 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
                if (!utdm) {
                        ret = -ENOMEM;
                        dev_err(&pdev->dev, "No mem to alloc ucc tdm data\n");
-                       goto err_alloc_utdm;
+                       goto free_uhdlc_priv;
                }
                uhdlc_priv->utdm = utdm;
                ret = ucc_of_parse_tdm(np, utdm, ut_info);
                if (ret)
-                       goto err_miss_tsa_property;
+                       goto free_utdm;
        }
 
        ret = uhdlc_init(uhdlc_priv);
        if (ret) {
                dev_err(&pdev->dev, "Failed to init uhdlc\n");
-               goto err_hdlc_init;
+               goto free_utdm;
        }
 
        dev = alloc_hdlcdev(uhdlc_priv);
        if (!dev) {
                ret = -ENOMEM;
                pr_err("ucc_hdlc: unable to allocate memory\n");
-               goto err_hdlc_init;
+               goto undo_uhdlc_init;
        }
 
        uhdlc_priv->ndev = dev;
@@ -1120,19 +1117,19 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
                ret = -ENOBUFS;
                pr_err("ucc_hdlc: unable to register hdlc device\n");
                free_netdev(dev);
-               goto err_hdlc_init;
+               goto free_dev;
        }
 
        return 0;
 
-err_hdlc_init:
-err_miss_tsa_property:
-       kfree(uhdlc_priv);
+free_dev:
+       free_netdev(dev);
+undo_uhdlc_init:
+free_utdm:
        if (uhdlc_priv->tsa)
                kfree(utdm);
-err_alloc_utdm:
+free_uhdlc_priv:
        kfree(uhdlc_priv);
-err_alloc_priv:
        return ret;
 }
 
@@ -1171,22 +1168,10 @@ static struct platform_driver ucc_hdlc_driver = {
        .probe  = ucc_hdlc_probe,
        .remove = ucc_hdlc_remove,
        .driver = {
-               .owner          = THIS_MODULE,
                .name           = DRV_NAME,
                .pm             = HDLC_PM_OPS,
                .of_match_table = fsl_ucc_hdlc_of_match,
        },
 };
 
-static int __init ucc_hdlc_init(void)
-{
-       return platform_driver_register(&ucc_hdlc_driver);
-}
-
-static void __exit ucc_hdlc_exit(void)
-{
-       platform_driver_unregister(&ucc_hdlc_driver);
-}
-
-module_init(ucc_hdlc_init);
-module_exit(ucc_hdlc_exit);
+module_platform_driver(ucc_hdlc_driver);
index afe2b20..b8f7dec 100644 (file)
@@ -789,12 +789,48 @@ s32 brcmf_notify_escan_complete(struct brcmf_cfg80211_info *cfg,
        return err;
 }
 
+static int brcmf_cfg80211_del_ap_iface(struct wiphy *wiphy,
+                                      struct wireless_dev *wdev)
+{
+       struct brcmf_cfg80211_info *cfg = wiphy_priv(wiphy);
+       struct net_device *ndev = wdev->netdev;
+       struct brcmf_if *ifp = netdev_priv(ndev);
+       int ret;
+       int err;
+
+       brcmf_cfg80211_arm_vif_event(cfg, ifp->vif);
+
+       err = brcmf_fil_bsscfg_data_set(ifp, "interface_remove", NULL, 0);
+       if (err) {
+               brcmf_err("interface_remove failed %d\n", err);
+               goto err_unarm;
+       }
+
+       /* wait for firmware event */
+       ret = brcmf_cfg80211_wait_vif_event(cfg, BRCMF_E_IF_DEL,
+                                           BRCMF_VIF_EVENT_TIMEOUT);
+       if (!ret) {
+               brcmf_err("timeout occurred\n");
+               err = -EIO;
+               goto err_unarm;
+       }
+
+       brcmf_remove_interface(ifp, true);
+
+err_unarm:
+       brcmf_cfg80211_arm_vif_event(cfg, NULL);
+       return err;
+}
+
 static
 int brcmf_cfg80211_del_iface(struct wiphy *wiphy, struct wireless_dev *wdev)
 {
        struct brcmf_cfg80211_info *cfg = wiphy_priv(wiphy);
        struct net_device *ndev = wdev->netdev;
 
+       if (ndev && ndev == cfg_to_ndev(cfg))
+               return -ENOTSUPP;
+
        /* vif event pending in firmware */
        if (brcmf_cfg80211_vif_event_armed(cfg))
                return -EBUSY;
@@ -811,12 +847,13 @@ int brcmf_cfg80211_del_iface(struct wiphy *wiphy, struct wireless_dev *wdev)
        switch (wdev->iftype) {
        case NL80211_IFTYPE_ADHOC:
        case NL80211_IFTYPE_STATION:
-       case NL80211_IFTYPE_AP:
        case NL80211_IFTYPE_AP_VLAN:
        case NL80211_IFTYPE_WDS:
        case NL80211_IFTYPE_MONITOR:
        case NL80211_IFTYPE_MESH_POINT:
                return -EOPNOTSUPP;
+       case NL80211_IFTYPE_AP:
+               return brcmf_cfg80211_del_ap_iface(wiphy, wdev);
        case NL80211_IFTYPE_P2P_CLIENT:
        case NL80211_IFTYPE_P2P_GO:
        case NL80211_IFTYPE_P2P_DEVICE:
@@ -6288,29 +6325,15 @@ static int brcmf_setup_ifmodes(struct wiphy *wiphy, struct brcmf_if *ifp)
        if (!combo)
                goto err;
 
-       c0_limits = kcalloc(p2p ? 3 : 2, sizeof(*c0_limits), GFP_KERNEL);
-       if (!c0_limits)
-               goto err;
-
-       if (p2p) {
-               p2p_limits = kcalloc(4, sizeof(*p2p_limits), GFP_KERNEL);
-               if (!p2p_limits)
-                       goto err;
-       }
-
-       if (mbss) {
-               mbss_limits = kcalloc(1, sizeof(*mbss_limits), GFP_KERNEL);
-               if (!mbss_limits)
-                       goto err;
-       }
-
        wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
                                 BIT(NL80211_IFTYPE_ADHOC) |
                                 BIT(NL80211_IFTYPE_AP);
 
        c = 0;
        i = 0;
-       combo[c].num_different_channels = 1;
+       c0_limits = kcalloc(p2p ? 3 : 2, sizeof(*c0_limits), GFP_KERNEL);
+       if (!c0_limits)
+               goto err;
        c0_limits[i].max = 1;
        c0_limits[i++].types = BIT(NL80211_IFTYPE_STATION);
        if (p2p) {
@@ -6328,6 +6351,7 @@ static int brcmf_setup_ifmodes(struct wiphy *wiphy, struct brcmf_if *ifp)
                c0_limits[i].max = 1;
                c0_limits[i++].types = BIT(NL80211_IFTYPE_AP);
        }
+       combo[c].num_different_channels = 1;
        combo[c].max_interfaces = i;
        combo[c].n_limits = i;
        combo[c].limits = c0_limits;
@@ -6335,7 +6359,9 @@ static int brcmf_setup_ifmodes(struct wiphy *wiphy, struct brcmf_if *ifp)
        if (p2p) {
                c++;
                i = 0;
-               combo[c].num_different_channels = 1;
+               p2p_limits = kcalloc(4, sizeof(*p2p_limits), GFP_KERNEL);
+               if (!p2p_limits)
+                       goto err;
                p2p_limits[i].max = 1;
                p2p_limits[i++].types = BIT(NL80211_IFTYPE_STATION);
                p2p_limits[i].max = 1;
@@ -6344,6 +6370,7 @@ static int brcmf_setup_ifmodes(struct wiphy *wiphy, struct brcmf_if *ifp)
                p2p_limits[i++].types = BIT(NL80211_IFTYPE_P2P_CLIENT);
                p2p_limits[i].max = 1;
                p2p_limits[i++].types = BIT(NL80211_IFTYPE_P2P_DEVICE);
+               combo[c].num_different_channels = 1;
                combo[c].max_interfaces = i;
                combo[c].n_limits = i;
                combo[c].limits = p2p_limits;
@@ -6351,14 +6378,19 @@ static int brcmf_setup_ifmodes(struct wiphy *wiphy, struct brcmf_if *ifp)
 
        if (mbss) {
                c++;
+               i = 0;
+               mbss_limits = kcalloc(1, sizeof(*mbss_limits), GFP_KERNEL);
+               if (!mbss_limits)
+                       goto err;
+               mbss_limits[i].max = 4;
+               mbss_limits[i++].types = BIT(NL80211_IFTYPE_AP);
                combo[c].beacon_int_infra_match = true;
                combo[c].num_different_channels = 1;
-               mbss_limits[0].max = 4;
-               mbss_limits[0].types = BIT(NL80211_IFTYPE_AP);
                combo[c].max_interfaces = 4;
-               combo[c].n_limits = 1;
+               combo[c].n_limits = i;
                combo[c].limits = mbss_limits;
        }
+
        wiphy->n_iface_combinations = n_combos;
        wiphy->iface_combinations = combo;
        return 0;
index 04bfc7e..7d77f86 100644 (file)
@@ -20,6 +20,7 @@
 /* for brcmu_d11inf */
 #include <brcmu_d11.h>
 
+#include "core.h"
 #include "fwil_types.h"
 #include "p2p.h"
 
index faf4e46..8d16f02 100644 (file)
@@ -548,12 +548,16 @@ fail:
        return -EBADE;
 }
 
-static void brcmf_net_detach(struct net_device *ndev)
+static void brcmf_net_detach(struct net_device *ndev, bool rtnl_locked)
 {
-       if (ndev->reg_state == NETREG_REGISTERED)
-               unregister_netdev(ndev);
-       else
+       if (ndev->reg_state == NETREG_REGISTERED) {
+               if (rtnl_locked)
+                       unregister_netdevice(ndev);
+               else
+                       unregister_netdev(ndev);
+       } else {
                brcmf_cfg80211_free_netdev(ndev);
+       }
 }
 
 void brcmf_net_setcarrier(struct brcmf_if *ifp, bool on)
@@ -634,7 +638,7 @@ fail:
 }
 
 struct brcmf_if *brcmf_add_if(struct brcmf_pub *drvr, s32 bsscfgidx, s32 ifidx,
-                             bool is_p2pdev, char *name, u8 *mac_addr)
+                             bool is_p2pdev, const char *name, u8 *mac_addr)
 {
        struct brcmf_if *ifp;
        struct net_device *ndev;
@@ -651,7 +655,7 @@ struct brcmf_if *brcmf_add_if(struct brcmf_pub *drvr, s32 bsscfgidx, s32 ifidx,
                        brcmf_err("ERROR: netdev:%s already exists\n",
                                  ifp->ndev->name);
                        netif_stop_queue(ifp->ndev);
-                       brcmf_net_detach(ifp->ndev);
+                       brcmf_net_detach(ifp->ndev, false);
                        drvr->iflist[bsscfgidx] = NULL;
                } else {
                        brcmf_dbg(INFO, "netdev:%s ignore IF event\n",
@@ -699,7 +703,8 @@ struct brcmf_if *brcmf_add_if(struct brcmf_pub *drvr, s32 bsscfgidx, s32 ifidx,
        return ifp;
 }
 
-static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx)
+static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx,
+                        bool rtnl_locked)
 {
        struct brcmf_if *ifp;
 
@@ -729,7 +734,7 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx)
                        cancel_work_sync(&ifp->multicast_work);
                        cancel_work_sync(&ifp->ndoffload_work);
                }
-               brcmf_net_detach(ifp->ndev);
+               brcmf_net_detach(ifp->ndev, rtnl_locked);
        } else {
                /* Only p2p device interfaces which get dynamically created
                 * end up here. In this case the p2p module should be informed
@@ -743,14 +748,14 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx)
        }
 }
 
-void brcmf_remove_interface(struct brcmf_if *ifp)
+void brcmf_remove_interface(struct brcmf_if *ifp, bool rtnl_locked)
 {
        if (!ifp || WARN_ON(ifp->drvr->iflist[ifp->bsscfgidx] != ifp))
                return;
        brcmf_dbg(TRACE, "Enter, bsscfgidx=%d, ifidx=%d\n", ifp->bsscfgidx,
                  ifp->ifidx);
        brcmf_fws_del_interface(ifp);
-       brcmf_del_if(ifp->drvr, ifp->bsscfgidx);
+       brcmf_del_if(ifp->drvr, ifp->bsscfgidx, rtnl_locked);
 }
 
 #ifdef CONFIG_INET
@@ -1057,9 +1062,9 @@ fail:
                brcmf_fws_deinit(drvr);
        }
        if (ifp)
-               brcmf_net_detach(ifp->ndev);
+               brcmf_net_detach(ifp->ndev, false);
        if (p2p_ifp)
-               brcmf_net_detach(p2p_ifp->ndev);
+               brcmf_net_detach(p2p_ifp->ndev, false);
        drvr->iflist[0] = NULL;
        drvr->iflist[1] = NULL;
        if (drvr->settings->ignore_probe_fail)
@@ -1128,7 +1133,7 @@ void brcmf_detach(struct device *dev)
 
        /* make sure primary interface removed last */
        for (i = BRCMF_MAX_IFS-1; i > -1; i--)
-               brcmf_remove_interface(drvr->iflist[i]);
+               brcmf_remove_interface(drvr->iflist[i], false);
 
        brcmf_cfg80211_detach(drvr->config);
 
index 2a075c5..8fa34ca 100644 (file)
@@ -215,8 +215,8 @@ char *brcmf_ifname(struct brcmf_if *ifp);
 struct brcmf_if *brcmf_get_ifp(struct brcmf_pub *drvr, int ifidx);
 int brcmf_net_attach(struct brcmf_if *ifp, bool rtnl_locked);
 struct brcmf_if *brcmf_add_if(struct brcmf_pub *drvr, s32 bsscfgidx, s32 ifidx,
-                             bool is_p2pdev, char *name, u8 *mac_addr);
-void brcmf_remove_interface(struct brcmf_if *ifp);
+                             bool is_p2pdev, const char *name, u8 *mac_addr);
+void brcmf_remove_interface(struct brcmf_if *ifp, bool rtnl_locked);
 void brcmf_txflowblock_if(struct brcmf_if *ifp,
                          enum brcmf_netif_stop_reason reason, bool state);
 void brcmf_txfinalize(struct brcmf_if *ifp, struct sk_buff *txp, bool success);
index b390561..79c081f 100644 (file)
@@ -18,6 +18,7 @@
 #include "brcmu_wifi.h"
 #include "brcmu_utils.h"
 
+#include "cfg80211.h"
 #include "core.h"
 #include "debug.h"
 #include "tracepoint.h"
@@ -182,8 +183,13 @@ static void brcmf_fweh_handle_if_event(struct brcmf_pub *drvr,
 
        err = brcmf_fweh_call_event_handler(ifp, emsg->event_code, emsg, data);
 
-       if (ifp && ifevent->action == BRCMF_E_IF_DEL)
-               brcmf_remove_interface(ifp);
+       if (ifp && ifevent->action == BRCMF_E_IF_DEL) {
+               bool armed = brcmf_cfg80211_vif_event_armed(drvr->config);
+
+               /* Default handling in case no-one waits for this event */
+               if (!armed)
+                       brcmf_remove_interface(ifp, false);
+       }
 }
 
 /**
index f38a821..66f942f 100644 (file)
@@ -2261,6 +2261,8 @@ int brcmf_p2p_del_vif(struct wiphy *wiphy, struct wireless_dev *wdev)
                        return 0;
                brcmf_p2p_cancel_remain_on_channel(vif->ifp);
                brcmf_p2p_deinit_discovery(p2p);
+               break;
+
        default:
                return -ENOTSUPP;
        }
@@ -2286,8 +2288,7 @@ int brcmf_p2p_del_vif(struct wiphy *wiphy, struct wireless_dev *wdev)
                else
                        err = 0;
        }
-       if (err)
-               brcmf_remove_interface(vif->ifp);
+       brcmf_remove_interface(vif->ifp, true);
 
        brcmf_cfg80211_arm_vif_event(cfg, NULL);
        if (vif->wdev.iftype != NL80211_IFTYPE_P2P_DEVICE)
@@ -2393,7 +2394,7 @@ void brcmf_p2p_detach(struct brcmf_p2p_info *p2p)
        if (vif != NULL) {
                brcmf_p2p_cancel_remain_on_channel(vif->ifp);
                brcmf_p2p_deinit_discovery(p2p);
-               brcmf_remove_interface(vif->ifp);
+               brcmf_remove_interface(vif->ifp, false);
        }
        /* just set it all to zero */
        memset(p2p, 0, sizeof(*p2p));
index e16ee60..c2a938b 100644 (file)
@@ -3349,8 +3349,8 @@ static void brcms_b_coreinit(struct brcms_c_info *wlc)
        dma_rxfill(wlc_hw->di[RX_FIFO]);
 }
 
-void
-static brcms_b_init(struct brcms_hardware *wlc_hw, u16 chanspec) {
+static void brcms_b_init(struct brcms_hardware *wlc_hw, u16 chanspec)
+{
        u32 macintmask;
        bool fastclk;
        struct brcms_c_info *wlc = wlc_hw->wlc;
index ca3cd21..69b826d 100644 (file)
@@ -1102,8 +1102,8 @@ static const char version[] = "airo.c 0.6 (Ben Reed & Javier Achirica)";
 struct airo_info;
 
 static int get_dec_u16( char *buffer, int *start, int limit );
-static void OUT4500( struct airo_info *, u16 register, u16 value );
-static unsigned short IN4500( struct airo_info *, u16 register );
+static void OUT4500( struct airo_info *, u16 reg, u16 value );
+static unsigned short IN4500( struct airo_info *, u16 reg );
 static u16 setup_card(struct airo_info*, u8 *mac, int lock);
 static int enable_MAC(struct airo_info *ai, int lock);
 static void disable_MAC(struct airo_info *ai, int lock);
index 5adb7ce..bfd6861 100644 (file)
@@ -4093,7 +4093,7 @@ static const char *ipw_get_status_code(u16 status)
        return "Unknown status value.";
 }
 
-static void inline average_init(struct average *avg)
+static inline void average_init(struct average *avg)
 {
        memset(avg, 0, sizeof(*avg));
 }
index 05828c6..6e7ed90 100644 (file)
@@ -8,7 +8,7 @@ iwlwifi-objs            += iwl-eeprom-read.o iwl-eeprom-parse.o
 iwlwifi-objs           += iwl-phy-db.o iwl-nvm-parse.o
 iwlwifi-objs           += pcie/drv.o pcie/rx.o pcie/tx.o pcie/trans.o
 iwlwifi-$(CONFIG_IWLDVM) += iwl-1000.o iwl-2000.o iwl-5000.o iwl-6000.o
-iwlwifi-$(CONFIG_IWLMVM) += iwl-7000.o iwl-8000.o iwl-9000.o
+iwlwifi-$(CONFIG_IWLMVM) += iwl-7000.o iwl-8000.o iwl-9000.o iwl-a000.o
 iwlwifi-objs           += iwl-trans.o
 
 iwlwifi-objs += $(iwlwifi-m)
index 8dda52a..6c2d6da 100644 (file)
@@ -205,23 +205,6 @@ static const __le32 iwlagn_def_3w_lookup[IWLAGN_BT_DECISION_LUT_SIZE] = {
        cpu_to_le32(0xf0005000),
 };
 
-
-/* Loose Coex */
-static const __le32 iwlagn_loose_lookup[IWLAGN_BT_DECISION_LUT_SIZE] = {
-       cpu_to_le32(0xaaaaaaaa),
-       cpu_to_le32(0xaaaaaaaa),
-       cpu_to_le32(0xaeaaaaaa),
-       cpu_to_le32(0xaaaaaaaa),
-       cpu_to_le32(0xcc00ff28),
-       cpu_to_le32(0x0000aaaa),
-       cpu_to_le32(0xcc00aaaa),
-       cpu_to_le32(0x0000aaaa),
-       cpu_to_le32(0x00000000),
-       cpu_to_le32(0x00000000),
-       cpu_to_le32(0xf0005000),
-       cpu_to_le32(0xf0005000),
-};
-
 /* Full concurrency */
 static const __le32 iwlagn_concurrent_lookup[IWLAGN_BT_DECISION_LUT_SIZE] = {
        cpu_to_le32(0xaaaaaaaa),
index 37b32a6..b498486 100644 (file)
@@ -1317,6 +1317,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
        trans_cfg.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds);
 
        switch (iwlwifi_mod_params.amsdu_size) {
+       case IWL_AMSDU_DEF:
        case IWL_AMSDU_4K:
                trans_cfg.rx_buf_size = IWL_AMSDU_4K;
                break;
@@ -1336,6 +1337,8 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
        trans_cfg.command_groups_size = ARRAY_SIZE(iwl_dvm_groups);
 
        trans_cfg.cmd_fifo = IWLAGN_CMD_FIFO_NUM;
+       trans_cfg.cb_data_offs = offsetof(struct ieee80211_tx_info,
+                                         driver_data[2]);
 
        WARN_ON(sizeof(priv->transport_queue_stop) * BITS_PER_BYTE <
                priv->cfg->base_params->num_of_queues);
index b228552..087e579 100644 (file)
@@ -523,11 +523,6 @@ static int iwlagn_rxon_connect(struct iwl_priv *priv,
                return ret;
        }
 
-       if (ctx->vif && ctx->vif->type == NL80211_IFTYPE_STATION &&
-           priv->cfg->ht_params && priv->cfg->ht_params->smps_mode)
-               ieee80211_request_smps(ctx->vif,
-                                      priv->cfg->ht_params->smps_mode);
-
        return 0;
 }
 
index 3ac298f..5c1e71f 100644 (file)
@@ -178,6 +178,7 @@ const struct iwl_cfg iwl5165_2ac_cfg = {
                .nvm_ver = IWL9000_NVM_VERSION,
                .nvm_calib_ver = IWL9000_TX_POWER_VERSION,
                .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+               .integrated = true,
 };
 
 MODULE_FIRMWARE(IWL9000_MODULE_FIRMWARE(IWL9000_UCODE_API_MAX));
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-a000.c b/drivers/net/wireless/intel/iwlwifi/iwl-a000.c
new file mode 100644 (file)
index 0000000..4d78232
--- /dev/null
@@ -0,0 +1,131 @@
+/******************************************************************************
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2015-2016 Intel Deutschland GmbH
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015-2016 Intel Deutschland GmbH
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  * Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ *  * Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *  * Neither the name Intel Corporation nor the names of its
+ *    contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *****************************************************************************/
+
+#include <linux/module.h>
+#include <linux/stringify.h>
+#include "iwl-config.h"
+#include "iwl-agn-hw.h"
+
+/* Highest firmware API version supported */
+#define IWL_A000_UCODE_API_MAX 24
+
+/* Lowest firmware API version supported */
+#define IWL_A000_UCODE_API_MIN 24
+
+/* NVM versions */
+#define IWL_A000_NVM_VERSION           0x0a1d
+#define IWL_A000_TX_POWER_VERSION      0xffff /* meaningless */
+
+/* Memory offsets and lengths */
+#define IWL_A000_DCCM_OFFSET           0x800000
+#define IWL_A000_DCCM_LEN              0x18000
+#define IWL_A000_DCCM2_OFFSET          0x880000
+#define IWL_A000_DCCM2_LEN             0x8000
+#define IWL_A000_SMEM_OFFSET           0x400000
+#define IWL_A000_SMEM_LEN              0x68000
+
+#define IWL_A000_FW_PRE "iwlwifi-Qu-a0-jf-b0-"
+#define IWL_A000_MODULE_FIRMWARE(api) \
+       IWL_A000_FW_PRE __stringify(api) ".ucode"
+
+#define NVM_HW_SECTION_NUM_FAMILY_A000         10
+
+static const struct iwl_base_params iwl_a000_base_params = {
+       .eeprom_size = OTP_LOW_IMAGE_SIZE_FAMILY_A000,
+       .num_of_queues = 31,
+       .shadow_ram_support = true,
+       .led_compensation = 57,
+       .wd_timeout = IWL_LONG_WD_TIMEOUT,
+       .max_event_log_size = 512,
+       .shadow_reg_enable = true,
+       .pcie_l1_allowed = true,
+};
+
+static const struct iwl_ht_params iwl_a000_ht_params = {
+       .stbc = true,
+       .ldpc = true,
+       .ht40_bands = BIT(NL80211_BAND_2GHZ) | BIT(NL80211_BAND_5GHZ),
+};
+
+#define IWL_DEVICE_A000                                                        \
+       .ucode_api_max = IWL_A000_UCODE_API_MAX,                        \
+       .ucode_api_min = IWL_A000_UCODE_API_MIN,                        \
+       .device_family = IWL_DEVICE_FAMILY_8000,                        \
+       .max_inst_size = IWL60_RTC_INST_SIZE,                           \
+       .max_data_size = IWL60_RTC_DATA_SIZE,                           \
+       .base_params = &iwl_a000_base_params,                           \
+       .led_mode = IWL_LED_RF_STATE,                                   \
+       .nvm_hw_section_num = NVM_HW_SECTION_NUM_FAMILY_A000,           \
+       .non_shared_ant = ANT_A,                                        \
+       .dccm_offset = IWL_A000_DCCM_OFFSET,                            \
+       .dccm_len = IWL_A000_DCCM_LEN,                                  \
+       .dccm2_offset = IWL_A000_DCCM2_OFFSET,                          \
+       .dccm2_len = IWL_A000_DCCM2_LEN,                                \
+       .smem_offset = IWL_A000_SMEM_OFFSET,                            \
+       .smem_len = IWL_A000_SMEM_LEN,                                  \
+       .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,           \
+       .apmg_not_supported = true,                                     \
+       .mq_rx_supported = true,                                        \
+       .vht_mu_mimo_supported = true,                                  \
+       .mac_addr_from_csr = true,                                      \
+       .use_tfh = true
+
+const struct iwl_cfg iwla000_2ac_cfg = {
+               .name = "Intel(R) Dual Band Wireless AC a000",
+               .fw_name_pre = IWL_A000_FW_PRE,
+               IWL_DEVICE_A000,
+               .ht_params = &iwl_a000_ht_params,
+               .nvm_ver = IWL_A000_NVM_VERSION,
+               .nvm_calib_ver = IWL_A000_TX_POWER_VERSION,
+               .max_ht_ampdu_exponent = IEEE80211_HT_MAX_AMPDU_64K,
+};
+
+MODULE_FIRMWARE(IWL_A000_MODULE_FIRMWARE(IWL_A000_UCODE_API_MAX));
index 4a0af7d..423b233 100644 (file)
@@ -66,8 +66,9 @@
 #define __IWL_CONFIG_H__
 
 #include <linux/types.h>
-#include <net/mac80211.h>
-
+#include <linux/netdevice.h>
+#include <linux/ieee80211.h>
+#include <linux/nl80211.h>
 
 enum iwl_device_family {
        IWL_DEVICE_FAMILY_UNDEFINED,
@@ -192,7 +193,6 @@ struct iwl_base_params {
  * @ht40_bands: bitmap of bands (using %NL80211_BAND_*) that support HT40
  */
 struct iwl_ht_params {
-       enum ieee80211_smps_mode smps_mode;
        u8 ht_greenfield_support:1,
           stbc:1,
           ldpc:1,
@@ -261,6 +261,7 @@ struct iwl_tt_params {
 #define OTP_LOW_IMAGE_SIZE_FAMILY_7000 (16 * 512 * sizeof(u16)) /* 16 KB */
 #define OTP_LOW_IMAGE_SIZE_FAMILY_8000 (32 * 512 * sizeof(u16)) /* 32 KB */
 #define OTP_LOW_IMAGE_SIZE_FAMILY_9000 OTP_LOW_IMAGE_SIZE_FAMILY_8000
+#define OTP_LOW_IMAGE_SIZE_FAMILY_A000 OTP_LOW_IMAGE_SIZE_FAMILY_9000
 
 struct iwl_eeprom_params {
        const u8 regulatory_bands[7];
@@ -319,6 +320,7 @@ struct iwl_pwr_tx_backoff {
  * @mq_rx_supported: multi-queue rx support
  * @vht_mu_mimo_supported: VHT MU-MIMO support
  * @rf_id: need to read rf_id to determine the firmware image
+ * @integrated: true for integrated (as opposed to discrete) devices
  *
  * We enable the driver to be backward compatible wrt. hardware features.
  * API differences in uCode shouldn't be handled here but through TLVs
@@ -362,7 +364,9 @@ struct iwl_cfg {
            apmg_not_supported:1,
            mq_rx_supported:1,
            vht_mu_mimo_supported:1,
-           rf_id:1;
+           rf_id:1,
+           integrated:1,
+           use_tfh:1;
        u8 valid_tx_ant;
        u8 valid_rx_ant;
        u8 non_shared_ant;
@@ -448,6 +452,7 @@ extern const struct iwl_cfg iwl4165_2ac_sdio_cfg;
 extern const struct iwl_cfg iwl9260_2ac_cfg;
 extern const struct iwl_cfg iwl9260lc_2ac_cfg;
 extern const struct iwl_cfg iwl5165_2ac_cfg;
+extern const struct iwl_cfg iwla000_2ac_cfg;
 #endif /* CONFIG_IWLMVM */
 
 #endif /* __IWL_CONFIG_H__ */
index b529134..871ad02 100644 (file)
 
 #define CSR_LED_REG             (CSR_BASE+0x094)
 #define CSR_DRAM_INT_TBL_REG   (CSR_BASE+0x0A0)
-#define CSR_MAC_SHADOW_REG_CTRL        (CSR_BASE+0x0A8) /* 6000 and up */
-
+#define CSR_MAC_SHADOW_REG_CTRL                (CSR_BASE + 0x0A8) /* 6000 and up */
+#define CSR_MAC_SHADOW_REG_CTRL_RX_WAKE        BIT(20)
+#define CSR_MAC_SHADOW_REG_CTL2                (CSR_BASE + 0x0AC)
+#define CSR_MAC_SHADOW_REG_CTL2_RX_WAKE        0xFFFF
 
 /* GIO Chicken Bits (PCI Express bus link power management) */
 #define CSR_GIO_CHICKEN_BITS    (CSR_BASE+0x100)
index 1103332..cd77c69 100644 (file)
@@ -41,6 +41,7 @@ static inline bool iwl_have_debug_level(u32 level)
 #endif
 }
 
+struct device;
 void __iwl_err(struct device *dev, bool rfkill_prefix, bool only_trace,
                const char *fmt, ...) __printf(4, 5);
 void __iwl_warn(struct device *dev, const char *fmt, ...) __printf(2, 3);
index 27914ee..1dccae6 100644 (file)
@@ -1,6 +1,7 @@
 /******************************************************************************
  *
  * Copyright(c) 2009 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2016 Intel Deutschland GmbH
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of version 2 of the GNU General Public License as
@@ -83,6 +84,23 @@ TRACE_EVENT(iwlwifi_dev_iowrite32,
                  __get_str(dev), __entry->offs, __entry->val)
 );
 
+TRACE_EVENT(iwlwifi_dev_iowrite64,
+       TP_PROTO(const struct device *dev, u64 offs, u64 val),
+       TP_ARGS(dev, offs, val),
+       TP_STRUCT__entry(
+               DEV_ENTRY
+               __field(u64, offs)
+               __field(u64, val)
+       ),
+       TP_fast_assign(
+               DEV_ASSIGN;
+               __entry->offs = offs;
+               __entry->val = val;
+       ),
+       TP_printk("[%s] write io[%llu] = %llu",
+                 __get_str(dev), __entry->offs, __entry->val)
+);
+
 TRACE_EVENT(iwlwifi_dev_iowrite_prph32,
        TP_PROTO(const struct device *dev, u32 offs, u32 val),
        TP_ARGS(dev, offs, val),
@@ -100,6 +118,23 @@ TRACE_EVENT(iwlwifi_dev_iowrite_prph32,
                  __get_str(dev), __entry->offs, __entry->val)
 );
 
+TRACE_EVENT(iwlwifi_dev_iowrite_prph64,
+       TP_PROTO(const struct device *dev, u64 offs, u64 val),
+       TP_ARGS(dev, offs, val),
+       TP_STRUCT__entry(
+               DEV_ENTRY
+               __field(u64, offs)
+               __field(u64, val)
+       ),
+       TP_fast_assign(
+               DEV_ASSIGN;
+               __entry->offs = offs;
+               __entry->val = val;
+       ),
+       TP_printk("[%s] write PRPH[%llu] = %llu",
+                 __get_str(dev), __entry->offs, __entry->val)
+);
+
 TRACE_EVENT(iwlwifi_dev_ioread_prph32,
        TP_PROTO(const struct device *dev, u32 offs, u32 val),
        TP_ARGS(dev, offs, val),
index f4d3cd0..545d14b 100644 (file)
@@ -1,6 +1,7 @@
 /******************************************************************************
  *
  * Copyright(c) 2009 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(C) 2016 Intel Deutschland GmbH
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms of version 2 of the GNU General Public License as
 static inline bool iwl_trace_data(struct sk_buff *skb)
 {
        struct ieee80211_hdr *hdr = (void *)skb->data;
-       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+       __le16 fc = hdr->frame_control;
+       int offs = 24; /* start with normal header length */
 
-       if (!ieee80211_is_data(hdr->frame_control))
+       if (!ieee80211_is_data(fc))
                return false;
-       return !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO);
+
+       /* Try to determine if the frame is EAPOL. This might have false
+        * positives (if there's no RFC 1042 header and we compare to some
+        * payload instead) but since we're only doing tracing that's not
+        * a problem.
+        */
+
+       if (ieee80211_has_a4(fc))
+               offs += 6;
+       if (ieee80211_is_data_qos(fc))
+               offs += 2;
+       /* don't account for crypto - these are unencrypted */
+
+       /* also account for the RFC 1042 header, of course */
+       offs += 6;
+
+       return skb->len > offs + 2 &&
+              *(__be16 *)(skb->data + offs) == cpu_to_be16(ETH_P_PAE);
 }
 
 static inline size_t iwl_rx_trace_len(const struct iwl_trans *trans,
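The replacement heuristic above walks the 802.11 data header to locate the LLC/SNAP ethertype and compares it against EAPOL (0x888E). A minimal standalone sketch of that offset arithmetic, assuming a raw frame buffer; the `has_a4`/`is_qos` flags stand in for the driver's `ieee80211_has_a4()`/`ieee80211_is_data_qos()` frame-control tests, and `is_probably_eapol()` is a hypothetical helper:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define ETH_P_PAE 0x888E /* EAPOL (port access entity) ethertype */

/* Hypothetical helper mirroring the driver's heuristic: compute where
 * the LLC/SNAP ethertype should sit and compare it against EAPOL.
 * Like the original, this can false-positive on payload bytes that
 * happen to match, which is acceptable for tracing purposes.
 */
static bool is_probably_eapol(const uint8_t *data, size_t len,
			      bool has_a4, bool is_qos)
{
	size_t offs = 24;	/* normal 802.11 data header length */

	if (has_a4)
		offs += 6;	/* fourth address field */
	if (is_qos)
		offs += 2;	/* QoS control field */
	/* crypto headers are not counted - EAPOL frames are unencrypted */
	offs += 6;		/* RFC 1042 (LLC/SNAP) header */

	if (len <= offs + 2)
		return false;
	return ((data[offs] << 8) | data[offs + 1]) == ETH_P_PAE;
}
```

With a plain (non-A4, non-QoS) frame the ethertype lands at offset 30, which is what the checks in the patch compute for the common case.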
index f52ff75..45b2f67 100644 (file)
@@ -129,8 +129,8 @@ struct iwl_drv {
 };
 
 enum {
-       DVM_OP_MODE =   0,
-       MVM_OP_MODE =   1,
+       DVM_OP_MODE,
+       MVM_OP_MODE,
 };
 
 /* Protects the table contents, i.e. the ops pointer & drv list */
@@ -326,8 +326,6 @@ static int iwl_store_cscheme(struct iwl_fw *fw, const u8 *data, const u32 len)
        int i, j;
        struct iwl_fw_cscheme_list *l = (struct iwl_fw_cscheme_list *)data;
        struct iwl_fw_cipher_scheme *fwcs;
-       struct ieee80211_cipher_scheme *cs;
-       u32 cipher;
 
        if (len < sizeof(*l) ||
            len < sizeof(l->size) + l->size * sizeof(l->cs[0]))
@@ -335,22 +333,12 @@ static int iwl_store_cscheme(struct iwl_fw *fw, const u8 *data, const u32 len)
 
        for (i = 0, j = 0; i < IWL_UCODE_MAX_CS && i < l->size; i++) {
                fwcs = &l->cs[j];
-               cipher = le32_to_cpu(fwcs->cipher);
 
                /* we skip schemes with zero cipher suite selector */
-               if (!cipher)
+               if (!fwcs->cipher)
                        continue;
 
-               cs = &fw->cs[j++];
-               cs->cipher = cipher;
-               cs->iftype = BIT(NL80211_IFTYPE_STATION);
-               cs->hdr_len = fwcs->hdr_len;
-               cs->pn_len = fwcs->pn_len;
-               cs->pn_off = fwcs->pn_off;
-               cs->key_idx_off = fwcs->key_idx_off;
-               cs->key_idx_mask = fwcs->key_idx_mask;
-               cs->key_idx_shift = fwcs->key_idx_shift;
-               cs->mic_len = fwcs->mic_len;
+               fw->cs[j++] = *fwcs;
        }
 
        return 0;
@@ -795,17 +783,17 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
                 case IWL_UCODE_TLV_SEC_RT:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_REGULAR,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_SEC_INIT:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_INIT,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_SEC_WOWLAN:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_WOWLAN,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_DEF_CALIB:
                        if (tlv_len != sizeof(struct iwl_tlv_calib_data))
@@ -827,17 +815,17 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
                 case IWL_UCODE_TLV_SECURE_SEC_RT:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_REGULAR,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_SECURE_SEC_INIT:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_INIT,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_SECURE_SEC_WOWLAN:
                        iwl_store_ucode_sec(pieces, tlv_data, IWL_UCODE_WOWLAN,
                                            tlv_len);
-                       drv->fw.mvm_fw = true;
+                       drv->fw.type = IWL_FW_MVM;
                        break;
                case IWL_UCODE_TLV_NUM_OF_CPU:
                        if (tlv_len != sizeof(u32))
@@ -1275,7 +1263,7 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
         * In mvm uCode there is no difference between data and instructions
         * sections.
         */
-       if (!fw->mvm_fw && validate_sec_sizes(drv, pieces, drv->cfg))
+       if (fw->type == IWL_FW_DVM && validate_sec_sizes(drv, pieces, drv->cfg))
                goto try_again;
 
        /* Allocate ucode buffers for card's bus-master loading ... */
@@ -1403,10 +1391,16 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
        release_firmware(ucode_raw);
 
        mutex_lock(&iwlwifi_opmode_table_mtx);
-       if (fw->mvm_fw)
-               op = &iwlwifi_opmode_table[MVM_OP_MODE];
-       else
+       switch (fw->type) {
+       case IWL_FW_DVM:
                op = &iwlwifi_opmode_table[DVM_OP_MODE];
+               break;
+       default:
+               WARN(1, "Invalid fw type %d\n", fw->type);
+       case IWL_FW_MVM:
+               op = &iwlwifi_opmode_table[MVM_OP_MODE];
+               break;
+       }
 
        IWL_INFO(drv, "loaded firmware version %s op_mode %s\n",
                 drv->fw.fw_version, op->name);
@@ -1658,7 +1652,8 @@ MODULE_PARM_DESC(11n_disable,
        "disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX");
 module_param_named(amsdu_size, iwlwifi_mod_params.amsdu_size,
                   int, S_IRUGO);
-MODULE_PARM_DESC(amsdu_size, "amsdu size 0:4K 1:8K 2:12K (default 0)");
+MODULE_PARM_DESC(amsdu_size,
+                "amsdu size 0: 12K for multi Rx queue devices, 4K for other devices 1:4K 2:8K 3:12K (default 0)");
 module_param_named(fw_restart, iwlwifi_mod_params.restart_fw, bool, S_IRUGO);
 MODULE_PARM_DESC(fw_restart, "restart firmware in case of error (default true)");
 
index bf1b69a..3199d34 100644 (file)
@@ -766,7 +766,9 @@ void iwl_init_ht_hw_capab(const struct iwl_cfg *cfg,
        if (cfg->ht_params->ldpc)
                ht_info->cap |= IEEE80211_HT_CAP_LDPC_CODING;
 
-       if (iwlwifi_mod_params.amsdu_size >= IWL_AMSDU_8K)
+       if ((cfg->mq_rx_supported &&
+            iwlwifi_mod_params.amsdu_size != IWL_AMSDU_4K) ||
+            iwlwifi_mod_params.amsdu_size >= IWL_AMSDU_8K)
                ht_info->cap |= IEEE80211_HT_CAP_MAX_AMSDU;
 
        ht_info->ampdu_factor = cfg->max_ht_ampdu_exponent;
index 1f4e502..e04a91d 100644 (file)
@@ -66,6 +66,7 @@
 
 #include <linux/types.h>
 #include <linux/if_ether.h>
+#include <net/cfg80211.h>
 #include "iwl-trans.h"
 
 struct iwl_nvm_data {
index 270f39e..1d6f5d2 100644 (file)
@@ -77,6 +77,7 @@
  */
 #define FH_MEM_LOWER_BOUND                   (0x1000)
 #define FH_MEM_UPPER_BOUND                   (0x2000)
+#define TFH_MEM_LOWER_BOUND                  (0xA06000)
 
 /**
  * Keep-Warm (KW) buffer base address.
 #define FH_MEM_CBBC_16_19_UPPER_BOUND          (FH_MEM_LOWER_BOUND + 0xC00)
 #define FH_MEM_CBBC_20_31_LOWER_BOUND          (FH_MEM_LOWER_BOUND + 0xB20)
 #define FH_MEM_CBBC_20_31_UPPER_BOUND          (FH_MEM_LOWER_BOUND + 0xB80)
+/* a000 TFD table address, 64 bit */
+#define TFH_TFDQ_CBB_TABLE                     (TFH_MEM_LOWER_BOUND + 0x1C00)
 
 /* Find TFD CB base pointer for given queue */
-static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl)
+static inline unsigned int FH_MEM_CBBC_QUEUE(struct iwl_trans *trans,
+                                            unsigned int chnl)
 {
+       if (trans->cfg->use_tfh) {
+               WARN_ON_ONCE(chnl >= 64);
+               return TFH_TFDQ_CBB_TABLE + 8 * chnl;
+       }
        if (chnl < 16)
                return FH_MEM_CBBC_0_15_LOWER_BOUND + 4 * chnl;
        if (chnl < 20)
@@ -130,6 +138,65 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl)
        return FH_MEM_CBBC_20_31_LOWER_BOUND + 4 * (chnl - 20);
 }
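The new `use_tfh` branch above gives each queue an 8-byte slot (a 64-bit TFD base pointer) in the a000 table, instead of the legacy 4-byte entries. A small sketch of just that address computation, using the constants defined in this patch (`tfh_cbb_queue_addr()` is a hypothetical standalone name):

```c
#include <stdint.h>

/* Constants as defined earlier in this patch. */
#define TFH_MEM_LOWER_BOUND 0xA06000u
#define TFH_TFDQ_CBB_TABLE  (TFH_MEM_LOWER_BOUND + 0x1C00)

/* a000/TFH devices keep one 64-bit TFD base pointer per queue,
 * hence the 8-byte stride; the table holds at most 64 queues
 * (the driver WARNs on chnl >= 64). */
static uint32_t tfh_cbb_queue_addr(unsigned int chnl)
{
	return TFH_TFDQ_CBB_TABLE + 8 * chnl;
}
```

Queue 0 thus maps to 0xA07C00 and each subsequent queue advances by 8 bytes, matching the `TFH_TFDQ_CBB_TABLE + 8 * chnl` expression in the patch.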
 
+/* a000 configuration registers */
+
+/*
+ * TFH Configuration register.
+ *
+ * BIT fields:
+ *
+ * Bits 3:0:
+ * Define the maximum number of pending read requests.
+ * Maximum configuration value allowed is 0xC
+ * Bits 9:8:
+ * Define the maximum transfer size. (64 / 128 / 256)
+ * Bit 10:
+ * When bit is set and transfer size is set to 128B, the TFH will enable
+ * reading chunks of more than 64B only if the read address is aligned to 128B.
+ * In case of DRAM read address which is not aligned to 128B, the TFH will
+ * enable transfer size which doesn't cross 64B DRAM address boundary.
+ */
+#define TFH_TRANSFER_MODE              (TFH_MEM_LOWER_BOUND + 0x1F40)
+#define TFH_TRANSFER_MAX_PENDING_REQ   0xc
+#define TFH_CHUNK_SIZE_128                     BIT(8)
+#define TFH_CHUNK_SPLIT_MODE           BIT(10)
+/*
+ * Defines the offset address in dwords referring from the beginning of the
+ * Tx CMD which will be updated in DRAM.
+ * Note that the TFH offset address for Tx CMD update is always referring to
+ * the start of the TFD first TB.
+ * In case of a DRAM Tx CMD update the TFH will update PN and Key ID
+ */
+#define TFH_TXCMD_UPDATE_CFG           (TFH_MEM_LOWER_BOUND + 0x1F48)
+/*
+ * Controls TX DMA operation
+ *
+ * BIT fields:
+ *
+ * Bits 31:30: Enable the SRAM DMA channel.
+ * Turning on bit 31 will kick the SRAM2DRAM DMA.
+ * Note that the sram2dram may be enabled only after configuring the DRAM and
+ * SRAM addresses registers and the byte count register.
+ * Bits 25:24: Defines the interrupt target upon dram2sram transfer done. When
+ * set to 1 - interrupt is sent to the driver
+ * Bit 0: Indicates the snoop configuration
+ */
+#define TFH_SRV_DMA_CHNL0_CTRL (TFH_MEM_LOWER_BOUND + 0x1F60)
+#define TFH_SRV_DMA_SNOOP      BIT(0)
+#define TFH_SRV_DMA_TO_DRIVER  BIT(24)
+#define TFH_SRV_DMA_START      BIT(31)
+
+/* Defines the DMA SRAM write start address to transfer a data block */
+#define TFH_SRV_DMA_CHNL0_SRAM_ADDR    (TFH_MEM_LOWER_BOUND + 0x1F64)
+
+/* Defines the 64bits DRAM start address to read the DMA data block from */
+#define TFH_SRV_DMA_CHNL0_DRAM_ADDR    (TFH_MEM_LOWER_BOUND + 0x1F68)
+
+/*
+ * Defines the number of bytes to transfer from DRAM to SRAM.
+ * Note that this register may be configured with non-dword aligned size.
+ */
+#define TFH_SRV_DMA_CHNL0_BC   (TFH_MEM_LOWER_BOUND + 0x1F70)
 
 /**
  * Rx SRAM Control and Status Registers (RSCSR)
@@ -344,6 +411,32 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl)
 #define RFH_RBDBUF_RBD0_LSB 0xA08300
 #define RFH_RBDBUF_RBD_LSB(q) (RFH_RBDBUF_RBD0_LSB + (q) * 8)
 
+/**
+ * RFH Status Register
+ *
+ * Bit fields:
+ *
+ * Bit 29: RBD_FETCH_IDLE
+ * This status flag is set by the RFH when there is no active RBD fetch from
+ * DRAM.
+ * Once the RFH RBD controller starts fetching (or when there is a pending
+ * RBD read response from DRAM), this flag is immediately turned off.
+ *
+ * Bit 30: SRAM_DMA_IDLE
+ * This status flag is set by the RFH when there is no active transaction from
+ * SRAM to DRAM.
+ * Once the SRAM to DRAM DMA is active, this flag is immediately turned off.
+ *
+ * Bit 31: RXF_DMA_IDLE
+ * This status flag is set by the RFH when there is no active transaction from
+ * RXF to DRAM.
+ * Once the RXF-to-DRAM DMA is active, this flag is immediately turned off.
+ */
+#define RFH_GEN_STATUS 0xA09808
+#define RBD_FETCH_IDLE BIT(29)
+#define SRAM_DMA_IDLE  BIT(30)
+#define RXF_DMA_IDLE   BIT(31)
+
 /* DMA configuration */
 #define RFH_RXF_DMA_CFG 0xA09820
 /* RB size */
@@ -384,7 +477,9 @@ static inline unsigned int FH_MEM_CBBC_QUEUE(unsigned int chnl)
 #define RFH_GEN_CFG    0xA09800
 #define RFH_GEN_CFG_SERVICE_DMA_SNOOP  BIT(0)
 #define RFH_GEN_CFG_RFH_DMA_SNOOP      BIT(1)
-#define RFH_GEN_CFG_RB_CHUNK_SIZE      BIT(4) /* 0 - 64B, 1- 128B */
+#define RFH_GEN_CFG_RB_CHUNK_SIZE_POS  4
+#define RFH_GEN_CFG_RB_CHUNK_SIZE_128  1
+#define RFH_GEN_CFG_RB_CHUNK_SIZE_64   0
 #define RFH_GEN_CFG_DEFAULT_RXQ_NUM_MASK 0xF00
 #define RFH_GEN_CFG_DEFAULT_RXQ_NUM_POS 8
 
index 09b7ea2..420c31d 100644 (file)
@@ -89,6 +89,9 @@
  * @IWL_FW_ERROR_PAGING: UMAC's image memory segments which were
  *     paged to the DRAM.
  * @IWL_FW_ERROR_DUMP_RADIO_REG: Dump the radio registers.
+ * @IWL_FW_ERROR_DUMP_EXTERNAL: used only by external code utilities, and
+ *     for that reason is not in use in any other place in the Linux Wi-Fi
+ *     stack.
  */
 enum iwl_fw_error_dump_type {
        /* 0 is deprecated */
@@ -106,6 +109,7 @@ enum iwl_fw_error_dump_type {
        IWL_FW_ERROR_DUMP_PAGING = 12,
        IWL_FW_ERROR_DUMP_RADIO_REG = 13,
        IWL_FW_ERROR_DUMP_INTERNAL_TXF = 14,
+       IWL_FW_ERROR_DUMP_EXTERNAL = 15, /* Do not move */
 
        IWL_FW_ERROR_DUMP_MAX,
 };
index 37dc09e..1b1e045 100644 (file)
@@ -301,7 +301,8 @@ typedef unsigned int __bitwise__ iwl_ucode_tlv_capa_t;
  * @IWL_UCODE_TLV_CAPA_DC2DC_SUPPORT: supports DC2DC Command
  * @IWL_UCODE_TLV_CAPA_CSUM_SUPPORT: supports TCP Checksum Offload
  * @IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS: support radio and beacon statistics
- * @IWL_UCODE_TLV_CAPA_P2P_STANDALONE_UAPSD: support p2p standalone U-APSD
+ * @IWL_UCODE_TLV_CAPA_P2P_SCM_UAPSD: supports U-APSD on p2p interface when it
+ *     is standalone or with a BSS station interface in the same binding.
  * @IWL_UCODE_TLV_CAPA_BT_COEX_PLCR: enabled BT Coex packet level co-running
  * @IWL_UCODE_TLV_CAPA_LAR_MULTI_MCC: ucode supports LAR updates with different
  *     sources for the MCC. This TLV bit is a future replacement to
@@ -312,6 +313,9 @@ typedef unsigned int __bitwise__ iwl_ucode_tlv_capa_t;
  * @IWL_UCODE_TLV_CAPA_EXTENDED_DTS_MEASURE: extended DTS measurement
  * @IWL_UCODE_TLV_CAPA_SHORT_PM_TIMEOUTS: supports short PM timeouts
  * @IWL_UCODE_TLV_CAPA_BT_MPLUT_SUPPORT: supports bt-coex Multi-priority LUT
+ * @IWL_UCODE_TLV_CAPA_CSA_AND_TBTT_OFFLOAD: the firmware supports CSA
+ *     countdown offloading. Beacon notifications are not sent to the host.
+ *     The fw also offloads TBTT alignment.
  * @IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION: firmware will decide on what
  *     antenna the beacon should be transmitted
  * @IWL_UCODE_TLV_CAPA_BEACON_STORING: firmware will store the latest beacon
@@ -326,6 +330,9 @@ typedef unsigned int __bitwise__ iwl_ucode_tlv_capa_t;
  * @IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG: support getting more shared
  *     memory addresses from the firmware.
  * @IWL_UCODE_TLV_CAPA_LQM_SUPPORT: supports Link Quality Measurement
+ * @IWL_UCODE_TLV_CAPA_TX_POWER_ACK: reduced TX power API has larger
+ *     command size (command version 4) that supports toggling ACK TX
+ *     power reduction.
  *
  * @NUM_IWL_UCODE_TLV_CAPA: number of bits used
  */
@@ -347,7 +354,7 @@ enum iwl_ucode_tlv_capa {
        IWL_UCODE_TLV_CAPA_DC2DC_CONFIG_SUPPORT         = (__force iwl_ucode_tlv_capa_t)19,
        IWL_UCODE_TLV_CAPA_CSUM_SUPPORT                 = (__force iwl_ucode_tlv_capa_t)21,
        IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS           = (__force iwl_ucode_tlv_capa_t)22,
-       IWL_UCODE_TLV_CAPA_P2P_STANDALONE_UAPSD         = (__force iwl_ucode_tlv_capa_t)26,
+       IWL_UCODE_TLV_CAPA_P2P_SCM_UAPSD                = (__force iwl_ucode_tlv_capa_t)26,
        IWL_UCODE_TLV_CAPA_BT_COEX_PLCR                 = (__force iwl_ucode_tlv_capa_t)28,
        IWL_UCODE_TLV_CAPA_LAR_MULTI_MCC                = (__force iwl_ucode_tlv_capa_t)29,
        IWL_UCODE_TLV_CAPA_BT_COEX_RRC                  = (__force iwl_ucode_tlv_capa_t)30,
@@ -356,6 +363,7 @@ enum iwl_ucode_tlv_capa {
        IWL_UCODE_TLV_CAPA_SHORT_PM_TIMEOUTS            = (__force iwl_ucode_tlv_capa_t)65,
        IWL_UCODE_TLV_CAPA_BT_MPLUT_SUPPORT             = (__force iwl_ucode_tlv_capa_t)67,
        IWL_UCODE_TLV_CAPA_MULTI_QUEUE_RX_SUPPORT       = (__force iwl_ucode_tlv_capa_t)68,
+       IWL_UCODE_TLV_CAPA_CSA_AND_TBTT_OFFLOAD         = (__force iwl_ucode_tlv_capa_t)70,
        IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION         = (__force iwl_ucode_tlv_capa_t)71,
        IWL_UCODE_TLV_CAPA_BEACON_STORING               = (__force iwl_ucode_tlv_capa_t)72,
        IWL_UCODE_TLV_CAPA_LAR_SUPPORT_V2               = (__force iwl_ucode_tlv_capa_t)73,
@@ -365,6 +373,7 @@ enum iwl_ucode_tlv_capa {
        IWL_UCODE_TLV_CAPA_USNIFFER_UNIFIED             = (__force iwl_ucode_tlv_capa_t)77,
        IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG        = (__force iwl_ucode_tlv_capa_t)80,
        IWL_UCODE_TLV_CAPA_LQM_SUPPORT                  = (__force iwl_ucode_tlv_capa_t)81,
+       IWL_UCODE_TLV_CAPA_TX_POWER_ACK                 = (__force iwl_ucode_tlv_capa_t)84,
 
        NUM_IWL_UCODE_TLV_CAPA
 #ifdef __CHECKER__
index e461d63..74ea68d 100644 (file)
@@ -67,7 +67,6 @@
 #ifndef __iwl_fw_h__
 #define __iwl_fw_h__
 #include <linux/types.h>
-#include <net/mac80211.h>
 
 #include "iwl-fw-file.h"
 #include "iwl-fw-error-dump.h"
@@ -230,6 +229,16 @@ struct iwl_gscan_capabilities {
        u32 max_number_of_black_listed_ssid;
 };
 
+/**
+ * enum iwl_fw_type - iwlwifi firmware type
+ * @IWL_FW_DVM: DVM firmware
+ * @IWL_FW_MVM: MVM firmware
+ */
+enum iwl_fw_type {
+       IWL_FW_DVM,
+       IWL_FW_MVM,
+};
+
 /**
  * struct iwl_fw - variables associated with the firmware
  *
@@ -244,7 +253,7 @@ struct iwl_gscan_capabilities {
  * @inst_evtlog_ptr: event log offset for runtime ucode.
  * @inst_evtlog_size: event log size for runtime ucode.
 * @inst_errlog_ptr: error log offset for runtime ucode.
- * @mvm_fw: indicates this is MVM firmware
+ * @type: firmware type (&enum iwl_fw_type)
  * @cipher_scheme: optional external cipher scheme.
  * @human_readable: human readable version
  * @sdio_adma_addr: the default address to set for the ADMA in SDIO mode until
@@ -275,9 +284,9 @@ struct iwl_fw {
        u8 valid_tx_ant;
        u8 valid_rx_ant;
 
-       bool mvm_fw;
+       enum iwl_fw_type type;
 
-       struct ieee80211_cipher_scheme cs[IWL_UCODE_MAX_CS];
+       struct iwl_fw_cipher_scheme cs[IWL_UCODE_MAX_CS];
        u8 human_readable[FW_VER_HUMAN_READABLE_SZ];
 
        u32 sdio_adma_addr;
index 32c8f84..92c8b5f 100644 (file)
@@ -1,7 +1,7 @@
 /******************************************************************************
  *
  * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
- * Copyright(c) 2015 Intel Deutschland GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
  *
  * Portions of this file are derived from the ipw3945 project.
  *
@@ -51,6 +51,14 @@ void iwl_write32(struct iwl_trans *trans, u32 ofs, u32 val)
 }
 IWL_EXPORT_SYMBOL(iwl_write32);
 
+void iwl_write64(struct iwl_trans *trans, u64 ofs, u64 val)
+{
+       trace_iwlwifi_dev_iowrite64(trans->dev, ofs, val);
+       iwl_trans_write32(trans, ofs, val & 0xffffffff);
+       iwl_trans_write32(trans, ofs + 4, val >> 32);
+}
+IWL_EXPORT_SYMBOL(iwl_write64);
+
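(Illustrative aside, not part of the patch.) The new iwl_write64() emulates a 64-bit register write as two 32-bit writes: low dword at ofs, high dword at ofs + 4. A minimal standalone sketch of that split, with a hypothetical regs[] array standing in for the device:

```c
#include <stdint.h>

/* Stand-in for two consecutive 32-bit device registers. */
static uint32_t regs[2];

static void write32(uint32_t ofs, uint32_t val)
{
	regs[ofs / 4] = val;
}

/* The same split iwl_write64() performs via iwl_trans_write32():
 * low dword first, then the high dword at ofs + 4. */
static void write64(uint32_t ofs, uint64_t val)
{
	write32(ofs, (uint32_t)(val & 0xffffffff));
	write32(ofs + 4, (uint32_t)(val >> 32));
}
```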
 u32 iwl_read32(struct iwl_trans *trans, u32 ofs)
 {
        u32 val = iwl_trans_read32(trans, ofs);
@@ -102,6 +110,17 @@ void iwl_write_direct32(struct iwl_trans *trans, u32 reg, u32 value)
 }
 IWL_EXPORT_SYMBOL(iwl_write_direct32);
 
+void iwl_write_direct64(struct iwl_trans *trans, u64 reg, u64 value)
+{
+       unsigned long flags;
+
+       if (iwl_trans_grab_nic_access(trans, &flags)) {
+               iwl_write64(trans, reg, value);
+               iwl_trans_release_nic_access(trans, &flags);
+       }
+}
+IWL_EXPORT_SYMBOL(iwl_write_direct64);
+
 int iwl_poll_direct_bit(struct iwl_trans *trans, u32 addr, u32 mask,
                        int timeout)
 {
@@ -133,6 +152,14 @@ void iwl_write_prph_no_grab(struct iwl_trans *trans, u32 ofs, u32 val)
 }
 IWL_EXPORT_SYMBOL(iwl_write_prph_no_grab);
 
+void iwl_write_prph64_no_grab(struct iwl_trans *trans, u64 ofs, u64 val)
+{
+       trace_iwlwifi_dev_iowrite_prph64(trans->dev, ofs, val);
+       iwl_write_prph_no_grab(trans, ofs, val & 0xffffffff);
+       iwl_write_prph_no_grab(trans, ofs + 4, val >> 32);
+}
+IWL_EXPORT_SYMBOL(iwl_write_prph64_no_grab);
+
 u32 iwl_read_prph(struct iwl_trans *trans, u32 ofs)
 {
        unsigned long flags;
@@ -228,9 +255,117 @@ void iwl_force_nmi(struct iwl_trans *trans)
 }
 IWL_EXPORT_SYMBOL(iwl_force_nmi);
 
-static const char *get_fh_string(int cmd)
+static const char *get_rfh_string(int cmd)
 {
 #define IWL_CMD(x) case x: return #x
+#define IWL_CMD_MQ(arg, reg, q) { if (arg == reg(q)) return #reg; }
+
+       int i;
+
+       for (i = 0; i < IWL_MAX_RX_HW_QUEUES; i++) {
+               IWL_CMD_MQ(cmd, RFH_Q_FRBDCB_BA_LSB, i);
+               IWL_CMD_MQ(cmd, RFH_Q_FRBDCB_WIDX, i);
+               IWL_CMD_MQ(cmd, RFH_Q_FRBDCB_RIDX, i);
+               IWL_CMD_MQ(cmd, RFH_Q_URBD_STTS_WPTR_LSB, i);
+       }
+
+       switch (cmd) {
+       IWL_CMD(RFH_RXF_DMA_CFG);
+       IWL_CMD(RFH_GEN_CFG);
+       IWL_CMD(RFH_GEN_STATUS);
+       IWL_CMD(FH_TSSR_TX_STATUS_REG);
+       IWL_CMD(FH_TSSR_TX_ERROR_REG);
+       default:
+               return "UNKNOWN";
+       }
+#undef IWL_CMD_MQ
+}
+
+struct reg {
+       u32 addr;
+       bool is64;
+};
+
+static int iwl_dump_rfh(struct iwl_trans *trans, char **buf)
+{
+       int i, q;
+       int num_q = trans->num_rx_queues;
+       static const u32 rfh_tbl[] = {
+               RFH_RXF_DMA_CFG,
+               RFH_GEN_CFG,
+               RFH_GEN_STATUS,
+               FH_TSSR_TX_STATUS_REG,
+               FH_TSSR_TX_ERROR_REG,
+       };
+       static const struct reg rfh_mq_tbl[] = {
+               { RFH_Q0_FRBDCB_BA_LSB, true },
+               { RFH_Q0_FRBDCB_WIDX, false },
+               { RFH_Q0_FRBDCB_RIDX, false },
+               { RFH_Q0_URBD_STTS_WPTR_LSB, true },
+       };
+
+#ifdef CONFIG_IWLWIFI_DEBUGFS
+       if (buf) {
+               int pos = 0;
+               /*
+                * Register (up to 34 for name + 8 blank/q for MQ): 40 chars
+                * Colon + space: 2 characters
+                * 0X%08x: 10 characters
+                * New line: 1 character
+                * Total of 53 characters
+                */
+               size_t bufsz = ARRAY_SIZE(rfh_tbl) * 53 +
+                              ARRAY_SIZE(rfh_mq_tbl) * 53 * num_q + 40;
+
+               *buf = kmalloc(bufsz, GFP_KERNEL);
+               if (!*buf)
+                       return -ENOMEM;
+
+               pos += scnprintf(*buf + pos, bufsz - pos,
+                               "RFH register values:\n");
+
+               for (i = 0; i < ARRAY_SIZE(rfh_tbl); i++)
+                       pos += scnprintf(*buf + pos, bufsz - pos,
+                               "%40s: 0X%08x\n",
+                               get_rfh_string(rfh_tbl[i]),
+                               iwl_read_prph(trans, rfh_tbl[i]));
+
+               for (i = 0; i < ARRAY_SIZE(rfh_mq_tbl); i++)
+                       for (q = 0; q < num_q; q++) {
+                               u32 addr = rfh_mq_tbl[i].addr;
+
+                               addr += q * (rfh_mq_tbl[i].is64 ? 8 : 4);
+                               pos += scnprintf(*buf + pos, bufsz - pos,
+                                       "%34s(q %2d): 0X%08x\n",
+                                       get_rfh_string(addr), q,
+                                       iwl_read_prph(trans, addr));
+                       }
+
+               return pos;
+       }
+#endif
+
+       IWL_ERR(trans, "RFH register values:\n");
+       for (i = 0; i < ARRAY_SIZE(rfh_tbl); i++)
+               IWL_ERR(trans, "  %34s: 0X%08x\n",
+                       get_rfh_string(rfh_tbl[i]),
+                       iwl_read_prph(trans, rfh_tbl[i]));
+
+       for (i = 0; i < ARRAY_SIZE(rfh_mq_tbl); i++)
+               for (q = 0; q < num_q; q++) {
+                       u32 addr = rfh_mq_tbl[i].addr;
+
+                       addr += q * (rfh_mq_tbl[i].is64 ? 8 : 4);
+                       IWL_ERR(trans, "  %34s(q %d): 0X%08x\n",
+                               get_rfh_string(addr), q,
+                               iwl_read_prph(trans, addr));
+               }
+
+       return 0;
+}
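(Illustrative aside, not part of the patch.) The dump loops above walk each per-queue register with addr += q * (is64 ? 8 : 4): 32-bit queue registers sit 4 bytes apart per queue, 64-bit ones 8 bytes apart (only the LSB half is read). A hypothetical helper showing the arithmetic:

```c
#include <stdbool.h>
#include <stdint.h>

/* Address of queue q's copy of a per-queue RFH register, mirroring the
 * stride computation in iwl_dump_rfh(). The base value is illustrative,
 * not a real RFH offset. */
static uint32_t rfh_queue_addr(uint32_t base, bool is64, int q)
{
	return base + (uint32_t)q * (is64 ? 8 : 4);
}
```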
+
+static const char *get_fh_string(int cmd)
+{
        switch (cmd) {
        IWL_CMD(FH_RSCSR_CHNL0_STTS_WPTR_REG);
        IWL_CMD(FH_RSCSR_CHNL0_RBDCB_BASE_REG);
@@ -262,6 +397,9 @@ int iwl_dump_fh(struct iwl_trans *trans, char **buf)
                FH_TSSR_TX_ERROR_REG
        };
 
+       if (trans->cfg->mq_rx_supported)
+               return iwl_dump_rfh(trans, buf);
+
 #ifdef CONFIG_IWLWIFI_DEBUGFS
        if (buf) {
                int pos = 0;
index a9bcc78..5c8c0e1 100644 (file)
@@ -34,6 +34,7 @@
 
 void iwl_write8(struct iwl_trans *trans, u32 ofs, u8 val);
 void iwl_write32(struct iwl_trans *trans, u32 ofs, u32 val);
+void iwl_write64(struct iwl_trans *trans, u64 ofs, u64 val);
 u32 iwl_read32(struct iwl_trans *trans, u32 ofs);
 
 static inline void iwl_set_bit(struct iwl_trans *trans, u32 reg, u32 mask)
@@ -53,11 +54,13 @@ int iwl_poll_direct_bit(struct iwl_trans *trans, u32 addr, u32 mask,
 
 u32 iwl_read_direct32(struct iwl_trans *trans, u32 reg);
 void iwl_write_direct32(struct iwl_trans *trans, u32 reg, u32 value);
+void iwl_write_direct64(struct iwl_trans *trans, u64 reg, u64 value);
 
 
 u32 iwl_read_prph_no_grab(struct iwl_trans *trans, u32 ofs);
 u32 iwl_read_prph(struct iwl_trans *trans, u32 ofs);
 void iwl_write_prph_no_grab(struct iwl_trans *trans, u32 ofs, u32 val);
+void iwl_write_prph64_no_grab(struct iwl_trans *trans, u64 ofs, u64 val);
 void iwl_write_prph(struct iwl_trans *trans, u32 ofs, u32 val);
 int iwl_poll_prph_bit(struct iwl_trans *trans, u32 addr,
                      u32 bits, u32 mask, int timeout);
index 6c5c2f9..4d32b10 100644 (file)
@@ -66,7 +66,6 @@
 #include <linux/types.h>
 #include <linux/spinlock.h>
 #include <linux/gfp.h>
-#include <net/mac80211.h>
 
 extern struct iwl_mod_params iwlwifi_mod_params;
 
@@ -87,9 +86,10 @@ enum iwl_disable_11n {
 };
 
 enum iwl_amsdu_size {
-       IWL_AMSDU_4K = 0,
-       IWL_AMSDU_8K = 1,
-       IWL_AMSDU_12K = 2,
+       IWL_AMSDU_DEF = 0,
+       IWL_AMSDU_4K = 1,
+       IWL_AMSDU_8K = 2,
+       IWL_AMSDU_12K = 3,
 };
 
 enum iwl_uapsd_disable {
@@ -105,7 +105,7 @@ enum iwl_uapsd_disable {
  * @sw_crypto: using hardware encryption, default = 0
  * @disable_11n: disable 11n capabilities, default = 0,
  *     use IWL_[DIS,EN]ABLE_HT_* constants
- * @amsdu_size: enable 8K amsdu size, default = 4K. enum iwl_amsdu_size.
+ * @amsdu_size: See &enum iwl_amsdu_size.
  * @restart_fw: restart firmware, default = 1
  * @bt_coex_active: enable bt coex, default = true
  * @led_mode: system default, default = 0
index 21653fe..43f8f7d 100644 (file)
@@ -397,6 +397,13 @@ static void iwl_init_vht_hw_capab(const struct iwl_cfg *cfg,
                vht_cap->cap |= IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN;
 
        switch (iwlwifi_mod_params.amsdu_size) {
+       case IWL_AMSDU_DEF:
+               if (cfg->mq_rx_supported)
+                       vht_cap->cap |=
+                               IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454;
+               else
+                       vht_cap->cap |= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895;
+               break;
        case IWL_AMSDU_4K:
                vht_cap->cap |= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895;
                break;
index 6c1d20d..459bf73 100644 (file)
@@ -417,5 +417,6 @@ enum {
 };
 
 #define UREG_CHICK             (0xA05C00)
+#define UREG_CHICK_MSI_ENABLE  BIT(24)
 #define UREG_CHICK_MSIX_ENABLE BIT(25)
 #endif                         /* __iwl_prph_h__ */
index 8193d36..5535e22 100644 (file)
@@ -211,6 +211,9 @@ struct iwl_cmd_header_wide {
 #define FH_RSCSR_FRAME_SIZE_MSK                0x00003FFF      /* bits 0-13 */
 #define FH_RSCSR_FRAME_INVALID         0x55550000
 #define FH_RSCSR_FRAME_ALIGN           0x40
+#define FH_RSCSR_RPA_EN                        BIT(25)
+#define FH_RSCSR_RXQ_POS               16
+#define FH_RSCSR_RXQ_MASK              0x3F0000
 
 struct iwl_rx_packet {
        /*
@@ -220,7 +223,13 @@ struct iwl_rx_packet {
         * 31:    flag flush RB request
         * 30:    flag ignore TC (terminal counter) request
         * 29:    flag fast IRQ request
-        * 28-14: Reserved
+        * 28-26: Reserved
+        * 25:    Offload enabled
+        * 24:    RPF enabled
+        * 23:    RSS enabled
+        * 22:    Checksum enabled
+        * 21-16: RX queue
+        * 15-14: Reserved
         * 13-00: RX frame size
         */
        __le32 len_n_flags;
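(Illustrative aside.) With the bits documented above, the RX queue and frame size can be pulled out of a host-order len_n_flags word using the new FH_RSCSR_* masks; the helper names below are hypothetical:

```c
#include <stdint.h>

#define FH_RSCSR_FRAME_SIZE_MSK	0x00003FFF	/* bits 0-13 */
#define FH_RSCSR_RXQ_POS	16
#define FH_RSCSR_RXQ_MASK	0x3F0000

/* Extract the RX frame size (bits 13-0). */
static uint32_t rx_frame_size(uint32_t len_n_flags)
{
	return len_n_flags & FH_RSCSR_FRAME_SIZE_MSK;
}

/* Extract the RX queue number (bits 21-16). */
static uint32_t rx_queue(uint32_t len_n_flags)
{
	return (len_n_flags & FH_RSCSR_RXQ_MASK) >> FH_RSCSR_RXQ_POS;
}
```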
@@ -383,11 +392,6 @@ static inline void iwl_free_rxb(struct iwl_rx_cmd_buffer *r)
 
 #define MAX_NO_RECLAIM_CMDS    6
 
-/*
- * The first entry in driver_data array in ieee80211_tx_info
- * that can be used by the transport.
- */
-#define IWL_TRANS_FIRST_DRIVER_DATA 2
 #define IWL_MASK(lo, hi) ((1 << (hi)) | ((1 << (hi)) - (1 << (lo))))
 
 /*
@@ -491,6 +495,8 @@ struct iwl_hcmd_arr {
  * @command_groups_size: number of command groups, to avoid illegal access
  * @sdio_adma_addr: the default address to set for the ADMA in SDIO mode until
  *     we get the ALIVE from the uCode
+ * @cb_data_offs: offset inside skb->cb to store transport data at, must have
+ *     space for at least two pointers
  */
 struct iwl_trans_config {
        struct iwl_op_mode *op_mode;
@@ -510,6 +516,8 @@ struct iwl_trans_config {
        int command_groups_size;
 
        u32 sdio_adma_addr;
+
+       u8 cb_data_offs;
 };
 
 struct iwl_trans_dump_data {
@@ -574,6 +582,7 @@ struct iwl_trans_txq_scd_cfg {
  *     configured. May sleep.
  * @txq_disable: de-configure a Tx queue to send AMPDUs
  *     Must be atomic
+ * @txq_set_shared_mode: change Tx queue shared/unshared marking
  * @wait_tx_queue_empty: wait until tx queues are empty. May sleep.
  * @freeze_txq_timer: prevents the timer of the queue from firing until the
  *     queue is set to awake. Must be atomic.
@@ -637,6 +646,9 @@ struct iwl_trans_ops {
        void (*txq_disable)(struct iwl_trans *trans, int queue,
                            bool configure_scd);
 
+       void (*txq_set_shared_mode)(struct iwl_trans *trans, u32 txq_id,
+                                   bool shared);
+
        int (*wait_tx_queue_empty)(struct iwl_trans *trans, u32 txq_bm);
        void (*freeze_txq_timer)(struct iwl_trans *trans, unsigned long txqs,
                                 bool freeze);
@@ -749,6 +761,7 @@ enum iwl_plat_pm_mode {
  * @ops - pointer to iwl_trans_ops
  * @op_mode - pointer to the op_mode
  * @cfg - pointer to the configuration
+ * @drv - pointer to iwl_drv
  * @status: a bit-mask of transport status flags
  * @dev - pointer to struct device * that represents the device
  * @max_skb_frags: maximum number of fragments an SKB can have when transmitted.
@@ -792,6 +805,7 @@ struct iwl_trans {
        const struct iwl_trans_ops *ops;
        struct iwl_op_mode *op_mode;
        const struct iwl_cfg *cfg;
+       struct iwl_drv *drv;
        enum iwl_trans_state state;
        unsigned long status;
 
@@ -1052,6 +1066,13 @@ iwl_trans_txq_enable_cfg(struct iwl_trans *trans, int queue, u16 ssn,
        trans->ops->txq_enable(trans, queue, ssn, cfg, queue_wdg_timeout);
 }
 
+static inline void iwl_trans_txq_set_shared_mode(struct iwl_trans *trans,
+                                                int queue, bool shared_mode)
+{
+       if (trans->ops->txq_set_shared_mode)
+               trans->ops->txq_set_shared_mode(trans, queue, shared_mode);
+}
+
 static inline void iwl_trans_txq_enable(struct iwl_trans *trans, int queue,
                                        int fifo, int sta_id, int tid,
                                        int frame_limit, u16 ssn,
index a63f5bb..5bdb6c2 100644 (file)
@@ -142,7 +142,7 @@ static const __le64 iwl_ci_mask[][3] = {
                cpu_to_le64(0x0)
        },
        {
-               cpu_to_le64(0xFFC0000000ULL),
+               cpu_to_le64(0xFE00000000ULL),
                cpu_to_le64(0x0ULL),
                cpu_to_le64(0x0ULL)
        },
@@ -615,8 +615,8 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
         * don't reduce the Tx power if one of these is true:
         *  we are in LOOSE
         *  single share antenna product
-        *  BT is active
-        *  we are associated
+        *  BT is inactive
+        *  we are not associated
         */
        if (iwl_get_coex_type(mvm, vif) == BT_COEX_LOOSE_LUT ||
            mvm->cfg->bt_shared_single_ant || !vif->bss_conf.assoc ||
index 406cf1c..b344898 100644 (file)
@@ -1020,6 +1020,8 @@ static ssize_t iwl_dbgfs_max_amsdu_len_write(struct iwl_mvm *mvm,
        int ret;
 
        ret = kstrtouint(buf, 0, &max_amsdu_len);
+       if (ret)
+               return ret;
 
        if (max_amsdu_len > IEEE80211_MAX_MPDU_LEN_VHT_11454)
                return -EINVAL;
index 2a33b69..204c1b1 100644 (file)
 
 #define BITS(nb) (BIT(nb) - 1)
 
-/**
- * enum iwl_bt_coex_flags - flags for BT_COEX command
- * @BT_COEX_MODE_POS:
- * @BT_COEX_MODE_MSK:
- * @BT_COEX_DISABLE_OLD:
- * @BT_COEX_2W_OLD:
- * @BT_COEX_3W_OLD:
- * @BT_COEX_NW_OLD:
- * @BT_COEX_AUTO_OLD:
- * @BT_COEX_BT_OLD: Antenna is for BT (manufacuring tests)
- * @BT_COEX_WIFI_OLD: Antenna is for BT (manufacuring tests)
- * @BT_COEX_SYNC2SCO:
- * @BT_COEX_CORUNNING:
- * @BT_COEX_MPLUT:
- * @BT_COEX_TTC:
- * @BT_COEX_RRC:
- *
- * The COEX_MODE must be set for each command. Even if it is not changed.
- */
-enum iwl_bt_coex_flags {
-       BT_COEX_MODE_POS                = 3,
-       BT_COEX_MODE_MSK                = BITS(3) << BT_COEX_MODE_POS,
-       BT_COEX_DISABLE_OLD             = 0x0 << BT_COEX_MODE_POS,
-       BT_COEX_2W_OLD                  = 0x1 << BT_COEX_MODE_POS,
-       BT_COEX_3W_OLD                  = 0x2 << BT_COEX_MODE_POS,
-       BT_COEX_NW_OLD                  = 0x3 << BT_COEX_MODE_POS,
-       BT_COEX_AUTO_OLD                = 0x5 << BT_COEX_MODE_POS,
-       BT_COEX_BT_OLD                  = 0x6 << BT_COEX_MODE_POS,
-       BT_COEX_WIFI_OLD                = 0x7 << BT_COEX_MODE_POS,
-       BT_COEX_SYNC2SCO                = BIT(7),
-       BT_COEX_CORUNNING               = BIT(8),
-       BT_COEX_MPLUT                   = BIT(9),
-       BT_COEX_TTC                     = BIT(20),
-       BT_COEX_RRC                     = BIT(21),
-};
-
-/*
- * indicates what has changed in the BT_COEX command.
- * BT_VALID_ENABLE must be set for each command. Commands without this bit will
- * discarded by the firmware
- */
-enum iwl_bt_coex_valid_bit_msk {
-       BT_VALID_ENABLE                 = BIT(0),
-       BT_VALID_BT_PRIO_BOOST          = BIT(1),
-       BT_VALID_MAX_KILL               = BIT(2),
-       BT_VALID_3W_TMRS                = BIT(3),
-       BT_VALID_KILL_ACK               = BIT(4),
-       BT_VALID_KILL_CTS               = BIT(5),
-       BT_VALID_REDUCED_TX_POWER       = BIT(6),
-       BT_VALID_LUT                    = BIT(7),
-       BT_VALID_WIFI_RX_SW_PRIO_BOOST  = BIT(8),
-       BT_VALID_WIFI_TX_SW_PRIO_BOOST  = BIT(9),
-       BT_VALID_MULTI_PRIO_LUT         = BIT(10),
-       BT_VALID_TRM_KICK_FILTER        = BIT(11),
-       BT_VALID_CORUN_LUT_20           = BIT(12),
-       BT_VALID_CORUN_LUT_40           = BIT(13),
-       BT_VALID_ANT_ISOLATION          = BIT(14),
-       BT_VALID_ANT_ISOLATION_THRS     = BIT(15),
-       BT_VALID_TXTX_DELTA_FREQ_THRS   = BIT(16),
-       BT_VALID_TXRX_MAX_FREQ_0        = BIT(17),
-       BT_VALID_SYNC_TO_SCO            = BIT(18),
-       BT_VALID_TTC                    = BIT(20),
-       BT_VALID_RRC                    = BIT(21),
-};
-
-/**
- * enum iwl_bt_reduced_tx_power - allows to reduce txpower for WiFi frames.
- * @BT_REDUCED_TX_POWER_CTL: reduce Tx power for control frames
- * @BT_REDUCED_TX_POWER_DATA: reduce Tx power for data frames
- *
- * This mechanism allows to have BT and WiFi run concurrently. Since WiFi
- * reduces its Tx power, it can work along with BT, hence reducing the amount
- * of WiFi frames being killed by BT.
- */
-enum iwl_bt_reduced_tx_power {
-       BT_REDUCED_TX_POWER_CTL         = BIT(0),
-       BT_REDUCED_TX_POWER_DATA        = BIT(1),
-};
-
 enum iwl_bt_coex_lut_type {
        BT_COEX_TIGHT_LUT = 0,
        BT_COEX_LOOSE_LUT,
@@ -158,64 +79,9 @@ enum iwl_bt_coex_lut_type {
        BT_COEX_INVALID_LUT = 0xff,
 }; /* BT_COEX_DECISION_LUT_INDEX_API_E_VER_1 */
 
-#define BT_COEX_LUT_SIZE (12)
 #define BT_COEX_CORUN_LUT_SIZE (32)
-#define BT_COEX_MULTI_PRIO_LUT_SIZE (2)
-#define BT_COEX_BOOST_SIZE (4)
 #define BT_REDUCED_TX_POWER_BIT BIT(7)
 
-/**
- * struct iwl_bt_coex_cmd_old - bt coex configuration command
- * @flags:&enum iwl_bt_coex_flags
- * @max_kill:
- * @bt_reduced_tx_power: enum %iwl_bt_reduced_tx_power
- * @override_primary_lut: enum %iwl_bt_coex_lut_type: BT_COEX_INVALID_LUT
- *     should be set by default
- * @override_secondary_lut: enum %iwl_bt_coex_lut_type: BT_COEX_INVALID_LUT
- *     should be set by default
- * @bt4_antenna_isolation: antenna isolation
- * @bt4_antenna_isolation_thr: antenna threshold value
- * @bt4_tx_tx_delta_freq_thr: TxTx delta frequency
- * @bt4_tx_rx_max_freq0: TxRx max frequency
- * @bt_prio_boost: BT priority boost registers
- * @wifi_tx_prio_boost: SW boost of wifi tx priority
- * @wifi_rx_prio_boost: SW boost of wifi rx priority
- * @kill_ack_msk: kill ACK mask. 1 - Tx ACK, 0 - kill Tx of ACK.
- * @kill_cts_msk: kill CTS mask. 1 - Tx CTS, 0 - kill Tx of CTS.
- * @decision_lut: PTA decision LUT, per Prio-Ch
- * @bt4_multiprio_lut: multi priority LUT configuration
- * @bt4_corun_lut20: co-running 20 MHz LUT configuration
- * @bt4_corun_lut40: co-running 40 MHz LUT configuration
- * @valid_bit_msk: enum %iwl_bt_coex_valid_bit_msk
- *
- * The structure is used for the BT_COEX command.
- */
-struct iwl_bt_coex_cmd_old {
-       __le32 flags;
-       u8 max_kill;
-       u8 bt_reduced_tx_power;
-       u8 override_primary_lut;
-       u8 override_secondary_lut;
-
-       u8 bt4_antenna_isolation;
-       u8 bt4_antenna_isolation_thr;
-       u8 bt4_tx_tx_delta_freq_thr;
-       u8 bt4_tx_rx_max_freq0;
-
-       __le32 bt_prio_boost[BT_COEX_BOOST_SIZE];
-       __le32 wifi_tx_prio_boost;
-       __le32 wifi_rx_prio_boost;
-       __le32 kill_ack_msk;
-       __le32 kill_cts_msk;
-
-       __le32 decision_lut[BT_COEX_MAX_LUT][BT_COEX_LUT_SIZE];
-       __le32 bt4_multiprio_lut[BT_COEX_MULTI_PRIO_LUT_SIZE];
-       __le32 bt4_corun_lut20[BT_COEX_CORUN_LUT_SIZE];
-       __le32 bt4_corun_lut40[BT_COEX_CORUN_LUT_SIZE];
-
-       __le32 valid_bit_msk;
-} __packed; /* BT_COEX_CMD_API_S_VER_5 */
-
 enum iwl_bt_coex_mode {
        BT_COEX_DISABLE                 = 0x0,
        BT_COEX_NW                      = 0x1,
@@ -385,92 +251,4 @@ struct iwl_bt_coex_profile_notif {
        u8 reserved[3];
 } __packed; /* BT_COEX_PROFILE_NTFY_API_S_VER_4 */
 
-enum iwl_bt_coex_prio_table_event {
-       BT_COEX_PRIO_TBL_EVT_INIT_CALIB1                = 0,
-       BT_COEX_PRIO_TBL_EVT_INIT_CALIB2                = 1,
-       BT_COEX_PRIO_TBL_EVT_PERIODIC_CALIB_LOW1        = 2,
-       BT_COEX_PRIO_TBL_EVT_PERIODIC_CALIB_LOW2        = 3,
-       BT_COEX_PRIO_TBL_EVT_PERIODIC_CALIB_HIGH1       = 4,
-       BT_COEX_PRIO_TBL_EVT_PERIODIC_CALIB_HIGH2       = 5,
-       BT_COEX_PRIO_TBL_EVT_DTIM                       = 6,
-       BT_COEX_PRIO_TBL_EVT_SCAN52                     = 7,
-       BT_COEX_PRIO_TBL_EVT_SCAN24                     = 8,
-       BT_COEX_PRIO_TBL_EVT_IDLE                       = 9,
-       BT_COEX_PRIO_TBL_EVT_MAX                        = 16,
-}; /* BT_COEX_PRIO_TABLE_EVENTS_API_E_VER_1 */
-
-enum iwl_bt_coex_prio_table_prio {
-       BT_COEX_PRIO_TBL_DISABLED       = 0,
-       BT_COEX_PRIO_TBL_PRIO_LOW       = 1,
-       BT_COEX_PRIO_TBL_PRIO_HIGH      = 2,
-       BT_COEX_PRIO_TBL_PRIO_BYPASS    = 3,
-       BT_COEX_PRIO_TBL_PRIO_COEX_OFF  = 4,
-       BT_COEX_PRIO_TBL_PRIO_COEX_ON   = 5,
-       BT_COEX_PRIO_TBL_PRIO_COEX_IDLE = 6,
-       BT_COEX_PRIO_TBL_MAX            = 8,
-}; /* BT_COEX_PRIO_TABLE_PRIORITIES_API_E_VER_1 */
-
-#define BT_COEX_PRIO_TBL_SHRD_ANT_POS     (0)
-#define BT_COEX_PRIO_TBL_PRIO_POS         (1)
-#define BT_COEX_PRIO_TBL_RESERVED_POS     (4)
-
-/**
- * struct iwl_bt_coex_prio_tbl_cmd - priority table for BT coex
- * @prio_tbl:
- */
-struct iwl_bt_coex_prio_tbl_cmd {
-       u8 prio_tbl[BT_COEX_PRIO_TBL_EVT_MAX];
-} __packed;
-
-/**
- * struct iwl_bt_coex_ci_cmd_old - bt coex channel inhibition command
- * @bt_primary_ci:
- * @bt_secondary_ci:
- * @co_run_bw_primary:
- * @co_run_bw_secondary:
- * @primary_ch_phy_id:
- * @secondary_ch_phy_id:
- *
- * Used for BT_COEX_CI command
- */
-struct iwl_bt_coex_ci_cmd_old {
-       __le64 bt_primary_ci;
-       __le64 bt_secondary_ci;
-
-       u8 co_run_bw_primary;
-       u8 co_run_bw_secondary;
-       u8 primary_ch_phy_id;
-       u8 secondary_ch_phy_id;
-} __packed; /* BT_CI_MSG_API_S_VER_1 */
-
-/**
- * struct iwl_bt_coex_profile_notif_old - notification about BT coex
- * @mbox_msg: message from BT to WiFi
- * @msg_idx: the index of the message
- * @bt_status: 0 - off, 1 - on
- * @bt_open_conn: number of BT connections open
- * @bt_traffic_load: load of BT traffic
- * @bt_agg_traffic_load: aggregated load of BT traffic
- * @bt_ci_compliance: 0 - no CI compliance, 1 - CI compliant
- * @primary_ch_lut: LUT used for primary channel
- * @secondary_ch_lut: LUT used for secondary channel
- * @bt_activity_grading: the activity of BT enum %iwl_bt_activity_grading
- */
-struct iwl_bt_coex_profile_notif_old {
-       __le32 mbox_msg[4];
-       __le32 msg_idx;
-       u8 bt_status;
-       u8 bt_open_conn;
-       u8 bt_traffic_load;
-       u8 bt_agg_traffic_load;
-       u8 bt_ci_compliance;
-       u8 ttc_enabled;
-       u8 rrc_enabled;
-       u8 reserved;
-
-       __le32 primary_ch_lut;
-       __le32 secondary_ch_lut;
-       __le32 bt_activity_grading;
-} __packed; /* BT_COEX_PROFILE_NTFY_API_S_VER_3 */
-
 #endif /* __fw_api_bt_coex_h__ */
index 95ac59d..0246506 100644 (file)
@@ -72,6 +72,9 @@
 #define NUM_MAC_INDEX_DRIVER   MAC_INDEX_AUX
 #define NUM_MAC_INDEX          (MAC_INDEX_AUX + 1)
 
+#define IWL_MVM_STATION_COUNT  16
+#define IWL_MVM_TDLS_STA_COUNT 4
+
 enum iwl_ac {
        AC_BK,
        AC_BE,
index 65a7c8a..404b0de 100644 (file)
@@ -7,7 +7,7 @@
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
- * Copyright(c) 2015 Intel Deutschland GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -34,7 +34,7 @@
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
- * Copyright(c) 2015 Intel Deutschland GmbH
+ * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -310,7 +310,8 @@ enum iwl_dev_tx_power_cmd_mode {
        IWL_TX_POWER_MODE_SET_MAC = 0,
        IWL_TX_POWER_MODE_SET_DEVICE = 1,
        IWL_TX_POWER_MODE_SET_CHAINS = 2,
-}; /* TX_POWER_REDUCED_FLAGS_TYPE_API_E_VER_2 */;
+       IWL_TX_POWER_MODE_SET_ACK = 3,
+}; /* TX_POWER_REDUCED_FLAGS_TYPE_API_E_VER_4 */
 
 /**
  * struct iwl_dev_tx_power_cmd_v2 - TX power reduction command
@@ -338,7 +339,7 @@ struct iwl_dev_tx_power_cmd_v2 {
  * @v2: version 2 of the command, embedded here for easier software handling
  * @per_chain_restriction: per chain restrictions
  */
-struct iwl_dev_tx_power_cmd {
+struct iwl_dev_tx_power_cmd_v3 {
        /* v3 is just an extension of v2 - keep this here */
        struct iwl_dev_tx_power_cmd_v2 v2;
        __le16 per_chain_restriction[IWL_NUM_CHAIN_LIMITS][IWL_NUM_SUB_BANDS];
@@ -346,6 +347,19 @@ struct iwl_dev_tx_power_cmd {
 
 #define IWL_DEV_MAX_TX_POWER 0x7FFF
 
+/**
+ * struct iwl_dev_tx_power_cmd - TX power reduction command
+ * @v3: version 3 of the command, embedded here for easier software handling
+ * @enable_ack_reduction: enable or disable close range ack TX power
+ *     reduction.
+ */
+struct iwl_dev_tx_power_cmd {
+       /* v4 is just an extension of v3 - keep this here */
+       struct iwl_dev_tx_power_cmd_v3 v3;
+       u8 enable_ack_reduction;
+       u8 reserved[3];
+} __packed; /* TX_REDUCED_POWER_API_S_VER_4 */
+
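(Illustrative aside.) The command versioning pattern used here, v4 embedding v3 which embeds v2 as its first member, guarantees each older version is a prefix of the newer layout, so code that only understands an older version can still parse the front of a newer command. A sketch with made-up field sizes, not the real TX power command ABI:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical versioned command layout: each version embeds the
 * previous one as its first member, so vN is a prefix of vN+1. */
struct cmd_v2 { uint16_t flags; uint16_t mac_id; };
struct cmd_v3 { struct cmd_v2 v2; uint16_t per_chain[4]; };
struct cmd_v4 { struct cmd_v3 v3; uint8_t enable_ack_reduction; uint8_t reserved[3]; };
```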
 /**
  * struct iwl_beacon_filter_cmd
  * REPLY_BEACON_FILTERING_CMD = 0xd2 (command)
index 1ca8e49..acc5cd5 100644 (file)
@@ -296,7 +296,7 @@ enum iwl_rx_mpdu_status {
        IWL_RX_MPDU_STATUS_OVERRUN_OK           = BIT(1),
        IWL_RX_MPDU_STATUS_SRC_STA_FOUND        = BIT(2),
        IWL_RX_MPDU_STATUS_KEY_VALID            = BIT(3),
-       IWL_RX_MPDU_STATUS_KEY_ERROR            = BIT(4),
+       IWL_RX_MPDU_STATUS_KEY_PARAM_OK         = BIT(4),
        IWL_RX_MPDU_STATUS_ICV_OK               = BIT(5),
        IWL_RX_MPDU_STATUS_MIC_OK               = BIT(6),
        IWL_RX_MPDU_RES_STATUS_TTAK_OK          = BIT(7),
@@ -311,7 +311,7 @@ enum iwl_rx_mpdu_status {
        IWL_RX_MPDU_STATUS_WEP_MATCH            = BIT(12),
        IWL_RX_MPDU_STATUS_EXT_IV_MATCH         = BIT(13),
        IWL_RX_MPDU_STATUS_KEY_ID_MATCH         = BIT(14),
-       IWL_RX_MPDU_STATUS_KEY_COLOR            = BIT(15),
+       IWL_RX_MPDU_STATUS_ROBUST_MNG_FRAME     = BIT(15),
 };
 
 enum iwl_rx_mpdu_hash_filter {
@@ -336,6 +336,18 @@ enum iwl_rx_mpdu_reorder_data {
        IWL_RX_MPDU_REORDER_BA_OLD_SN           = 0x80000000,
 };
 
+enum iwl_rx_mpdu_phy_info {
+       IWL_RX_MPDU_PHY_AMPDU           = BIT(5),
+       IWL_RX_MPDU_PHY_AMPDU_TOGGLE    = BIT(6),
+       IWL_RX_MPDU_PHY_SHORT_PREAMBLE  = BIT(7),
+       IWL_RX_MPDU_PHY_TSF_OVERLOAD    = BIT(8),
+};
+
+enum iwl_rx_mpdu_mac_info {
+       IWL_RX_MPDU_PHY_MAC_INDEX_MASK          = 0x0f,
+       IWL_RX_MPDU_PHY_PHY_INDEX_MASK          = 0xf0,
+};
+
 struct iwl_rx_mpdu_desc {
        /* DW2 */
        __le16 mpdu_len;
@@ -343,9 +355,9 @@ struct iwl_rx_mpdu_desc {
        u8 mac_flags2;
        /* DW3 */
        u8 amsdu_info;
-       __le16 reserved_for_software;
+       __le16 phy_info;
        u8 mac_phy_idx;
-       /* DW4 */
+       /* DW4 - carries csum data only when rpa_en == 1 */
        __le16 raw_csum; /* allegedly unreliable */
        __le16 l3l4_flags;
        /* DW5 */
@@ -354,17 +366,17 @@ struct iwl_rx_mpdu_desc {
        u8 sta_id_flags;
        /* DW6 */
        __le32 reorder_data;
-       /* DW7 */
+       /* DW7 - carries rss_hash only when rpa_en == 1 */
        __le32 rss_hash;
-       /* DW8 */
+       /* DW8 - carries filter_match only when rpa_en == 1 */
        __le32 filter_match;
        /* DW9 */
        __le32 rate_n_flags;
        /* DW10 */
-       u8 energy_a, energy_b, channel, reserved;
+       u8 energy_a, energy_b, channel, mac_context;
        /* DW11 */
        __le32 gp2_on_air_rise;
-       /* DW12 & DW13 */
+       /* DW12 & DW13 - carries TSF only when TSF_OVERLOAD bit == 0 */
        __le64 tsf_on_air_rise;
 } __packed;
 
@@ -435,26 +447,26 @@ struct iwl_rxq_sync_notification {
 } __packed; /* MULTI_QUEUE_DRV_SYNC_HDR_CMD_API_S_VER_1 */
 
 /**
-* Internal message identifier
-*
-* @IWL_MVM_RXQ_EMPTY: empty sync notification
-* @IWL_MVM_RXQ_NOTIF_DEL_BA: notify RSS queues of delBA
-*/
+ * Internal message identifier
+ *
+ * @IWL_MVM_RXQ_EMPTY: empty sync notification
+ * @IWL_MVM_RXQ_NOTIF_DEL_BA: notify RSS queues of delBA
+ */
 enum iwl_mvm_rxq_notif_type {
        IWL_MVM_RXQ_EMPTY,
        IWL_MVM_RXQ_NOTIF_DEL_BA,
 };
 
 /**
-* struct iwl_mvm_internal_rxq_notif - Internal representation of the data sent
-* in &iwl_rxq_sync_cmd. Should be DWORD aligned.
-* FW is agnostic to the payload, so there are no endianity requirements.
-*
-* @type: value from &iwl_mvm_rxq_notif_type
-* @sync: ctrl path is waiting for all notifications to be received
-* @cookie: internal cookie to identify old notifications
-* @data: payload
-*/
+ * struct iwl_mvm_internal_rxq_notif - Internal representation of the data sent
+ * in &iwl_rxq_sync_cmd. Should be DWORD aligned.
+ * FW is agnostic to the payload, so there are no endianity requirements.
+ *
+ * @type: value from &iwl_mvm_rxq_notif_type
+ * @sync: ctrl path is waiting for all notifications to be received
+ * @cookie: internal cookie to identify old notifications
+ * @data: payload
+ */
 struct iwl_mvm_internal_rxq_notif {
        u16 type;
        u16 sync;
index 38b1d04..d1c4fb8 100644 (file)
@@ -141,6 +141,7 @@ enum iwl_sta_flags {
  * @STA_KEY_FLG_CCM: CCMP encryption algorithm
  * @STA_KEY_FLG_TKIP: TKIP encryption algorithm
  * @STA_KEY_FLG_EXT: extended cipher algorithm (depends on the FW support)
+ * @STA_KEY_FLG_GCMP: GCMP encryption algorithm
  * @STA_KEY_FLG_CMAC: CMAC encryption algorithm
  * @STA_KEY_FLG_ENC_UNKNOWN: unknown encryption algorithm
  * @STA_KEY_FLG_EN_MSK: mask for encryption algorithmi value
@@ -149,6 +150,7 @@ enum iwl_sta_flags {
  * @STA_KEY_FLG_KEYID_MSK: the index of the key
  * @STA_KEY_NOT_VALID: key is invalid
  * @STA_KEY_FLG_WEP_13BYTES: set for 13 bytes WEP key
+ * @STA_KEY_FLG_KEY_32BYTES: set for 32-byte non-WEP keys
 * @STA_KEY_MULTICAST: set for multicast key
  * @STA_KEY_MFP: key is used for Management Frame Protection
  */
@@ -158,6 +160,7 @@ enum iwl_sta_key_flag {
        STA_KEY_FLG_CCM                 = (2 << 0),
        STA_KEY_FLG_TKIP                = (3 << 0),
        STA_KEY_FLG_EXT                 = (4 << 0),
+       STA_KEY_FLG_GCMP                = (5 << 0),
        STA_KEY_FLG_CMAC                = (6 << 0),
        STA_KEY_FLG_ENC_UNKNOWN         = (7 << 0),
        STA_KEY_FLG_EN_MSK              = (7 << 0),
@@ -167,6 +170,7 @@ enum iwl_sta_key_flag {
        STA_KEY_FLG_KEYID_MSK           = (3 << STA_KEY_FLG_KEYID_POS),
        STA_KEY_NOT_VALID               = BIT(11),
        STA_KEY_FLG_WEP_13BYTES         = BIT(12),
+       STA_KEY_FLG_KEY_32BYTES         = BIT(12),
        STA_KEY_MULTICAST               = BIT(14),
        STA_KEY_MFP                     = BIT(15),
 };
@@ -388,7 +392,6 @@ struct iwl_mvm_add_sta_cmd {
  * @key_offset: key offset in key storage
  * @key_flags: type %iwl_sta_key_flag
  * @key: key material data
- * @key2: key material data
  * @rx_secur_seq_cnt: RX security sequence counter for the key
  * @tkip_rx_tsc_byte2: TSC[2] for key mix ph1 detection
  * @tkip_rx_ttak: 10-byte unicast TKIP TTAK for Rx
@@ -397,8 +400,7 @@ struct iwl_mvm_add_sta_key_cmd {
        u8 sta_id;
        u8 key_offset;
        __le16 key_flags;
-       u8 key[16];
-       u8 key2[16];
+       u8 key[32];
        u8 rx_secur_seq_cnt[16];
        u8 tkip_rx_tsc_byte2;
        u8 reserved;
index 438665a..4e638a4 100644 (file)
@@ -7,6 +7,7 @@
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 Intel Deutschland GmbH
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -252,6 +253,20 @@ struct mvm_statistics_general_v8 {
        u8 reserved[4 - (NUM_MAC_INDEX % 4)];
 } __packed; /* STATISTICS_GENERAL_API_S_VER_8 */
 
+/**
+ * struct mvm_statistics_load - RX statistics for multi-queue devices
+ * @air_time: accumulated air time, per mac
+ * @byte_count: accumulated byte count, per mac
+ * @pkt_count: accumulated packet count, per mac
+ * @avg_energy: average RSSI, per station
+ */
+struct mvm_statistics_load {
+       __le32 air_time[NUM_MAC_INDEX];
+       __le32 byte_count[NUM_MAC_INDEX];
+       __le32 pkt_count[NUM_MAC_INDEX];
+       u8 avg_energy[IWL_MVM_STATION_COUNT];
+} __packed; /* STATISTICS_RX_MAC_STATION_S_VER_1 */
+
 struct mvm_statistics_rx {
        struct mvm_statistics_rx_phy ofdm;
        struct mvm_statistics_rx_phy cck;
@@ -266,7 +281,6 @@ struct mvm_statistics_rx {
  * while associated.  To disable this behavior, set DISABLE_NOTIF flag in the
  * STATISTICS_CMD (0x9c), below.
  */
-
 struct iwl_notif_statistics_v10 {
        __le32 flag;
        struct mvm_statistics_rx rx;
@@ -274,6 +288,14 @@ struct iwl_notif_statistics_v10 {
        struct mvm_statistics_general_v8 general;
 } __packed; /* STATISTICS_NTFY_API_S_VER_10 */
 
+struct iwl_notif_statistics_v11 {
+       __le32 flag;
+       struct mvm_statistics_rx rx;
+       struct mvm_statistics_tx tx;
+       struct mvm_statistics_general_v8 general;
+       struct mvm_statistics_load load_stats;
+} __packed; /* STATISTICS_NTFY_API_S_VER_11 */
+
 #define IWL_STATISTICS_FLG_CLEAR               0x1
 #define IWL_STATISTICS_FLG_DISABLE_NOTIF       0x2
 
index dadcccd..4144623 100644 (file)
@@ -137,17 +137,32 @@ enum iwl_tx_pm_timeouts {
        PM_FRAME_ASSOC          = 3,
 };
 
-/*
- * TX command security control
- */
-#define TX_CMD_SEC_WEP                 0x01
-#define TX_CMD_SEC_CCM                 0x02
-#define TX_CMD_SEC_TKIP                        0x03
-#define TX_CMD_SEC_EXT                 0x04
 #define TX_CMD_SEC_MSK                 0x07
 #define TX_CMD_SEC_WEP_KEY_IDX_POS     6
 #define TX_CMD_SEC_WEP_KEY_IDX_MSK     0xc0
-#define TX_CMD_SEC_KEY128              0x08
+
+/**
+ * enum iwl_tx_cmd_sec_ctrl - bitmasks for security control in TX command
+ * @TX_CMD_SEC_WEP: WEP encryption algorithm.
+ * @TX_CMD_SEC_CCM: CCM encryption algorithm.
+ * @TX_CMD_SEC_TKIP: TKIP encryption algorithm.
+ * @TX_CMD_SEC_EXT: extended cipher algorithm.
+ * @TX_CMD_SEC_GCMP: GCMP encryption algorithm.
+ * @TX_CMD_SEC_KEY128: set for a 104-bit WEP key.
+ * @TC_CMD_SEC_KEY_FROM_TABLE: for a non-WEP key, set if the key should be taken
+ *     from the table instead of from the TX command.
+ *     If the key is taken from the key table, its index should be given by the
+ *     first byte of the TX command key field.
+ */
+enum iwl_tx_cmd_sec_ctrl {
+       TX_CMD_SEC_WEP                  = 0x01,
+       TX_CMD_SEC_CCM                  = 0x02,
+       TX_CMD_SEC_TKIP                 = 0x03,
+       TX_CMD_SEC_EXT                  = 0x04,
+       TX_CMD_SEC_GCMP                 = 0x05,
+       TX_CMD_SEC_KEY128               = 0x08,
+       TC_CMD_SEC_KEY_FROM_TABLE       = 0x08,
+};
 
 /* TODO: how are these values OK with only a 16-bit variable??? */
 /*
@@ -562,8 +577,8 @@ struct iwl_mvm_ba_notif {
        u8 reserved1;
 } __packed;
 
-/*
- * struct iwl_mac_beacon_cmd - beacon template command
+/**
+ * struct iwl_mac_beacon_cmd_v6 - beacon template command
  * @tx: the tx commands associated with the beacon frame
  * @template_id: currently equal to the mac context id of the corresponding
  *  mac.
@@ -571,13 +586,34 @@ struct iwl_mvm_ba_notif {
  * @tim_size: the length of the tim IE
  * @frame: the template of the beacon frame
  */
+struct iwl_mac_beacon_cmd_v6 {
+       struct iwl_tx_cmd tx;
+       __le32 template_id;
+       __le32 tim_idx;
+       __le32 tim_size;
+       struct ieee80211_hdr frame[0];
+} __packed; /* BEACON_TEMPLATE_CMD_API_S_VER_6 */
+
+/**
+ * struct iwl_mac_beacon_cmd - beacon template command with offloaded CSA
+ * @tx: the tx commands associated with the beacon frame
+ * @template_id: currently equal to the mac context id of the corresponding
+ *  mac.
+ * @tim_idx: the offset of the tim IE in the beacon
+ * @tim_size: the length of the tim IE
+ * @ecsa_offset: offset to the ECSA IE if present
+ * @csa_offset: offset to the CSA IE if present
+ * @frame: the template of the beacon frame
+ */
 struct iwl_mac_beacon_cmd {
        struct iwl_tx_cmd tx;
        __le32 template_id;
        __le32 tim_idx;
        __le32 tim_size;
+       __le32 ecsa_offset;
+       __le32 csa_offset;
        struct ieee80211_hdr frame[0];
-} __packed;
+} __packed; /* BEACON_TEMPLATE_CMD_API_S_VER_7 */
 
 struct iwl_beacon_notif {
        struct iwl_mvm_tx_resp beacon_notify_hdr;
index 41b80ae..71076f0 100644 (file)
@@ -90,6 +90,7 @@ enum {
  * DQA queue numbers
  *
  * @IWL_MVM_DQA_CMD_QUEUE: a queue reserved for sending HCMDs to the FW
+ * @IWL_MVM_DQA_AUX_QUEUE: a queue reserved for aux frames
  * @IWL_MVM_DQA_P2P_DEVICE_QUEUE: a queue reserved for P2P device frames
  * @IWL_MVM_DQA_GCAST_QUEUE: a queue reserved for P2P GO/SoftAP GCAST frames
  * @IWL_MVM_DQA_BSS_CLIENT_QUEUE: a queue reserved for BSS activity, to ensure
@@ -108,6 +109,7 @@ enum {
  */
 enum iwl_mvm_dqa_txq {
        IWL_MVM_DQA_CMD_QUEUE = 0,
+       IWL_MVM_DQA_AUX_QUEUE = 1,
        IWL_MVM_DQA_P2P_DEVICE_QUEUE = 2,
        IWL_MVM_DQA_GCAST_QUEUE = 3,
        IWL_MVM_DQA_BSS_CLIENT_QUEUE = 4,
@@ -127,9 +129,6 @@ enum iwl_mvm_tx_fifo {
        IWL_MVM_TX_FIFO_CMD = 7,
 };
 
-#define IWL_MVM_STATION_COUNT  16
-
-#define IWL_MVM_TDLS_STA_COUNT 4
 
 /* commands */
 enum {
@@ -314,6 +313,7 @@ enum {
 enum iwl_mac_conf_subcmd_ids {
        LINK_QUALITY_MEASUREMENT_CMD = 0x1,
        LINK_QUALITY_MEASUREMENT_COMPLETE_NOTIF = 0xFE,
+       CHANNEL_SWITCH_NOA_NOTIF = 0xFF,
 };
 
 enum iwl_phy_ops_subcmd_ids {
@@ -329,6 +329,7 @@ enum iwl_system_subcmd_ids {
 };
 
 enum iwl_data_path_subcmd_ids {
+       DQA_ENABLE_CMD = 0x0,
        UPDATE_MU_GROUPS_CMD = 0x1,
        TRIGGER_RX_QUEUES_NOTIF_CMD = 0x2,
        MU_GROUP_MGMT_NOTIF = 0xFE,
@@ -358,6 +359,14 @@ struct iwl_cmd_response {
        __le32 status;
 };
 
+/**
+ * struct iwl_dqa_enable_cmd
+ * @cmd_queue: the TXQ number of the command queue
+ */
+struct iwl_dqa_enable_cmd {
+       __le32 cmd_queue;
+} __packed; /* DQA_CONTROL_CMD_API_S_VER_1 */
+
 /*
  * struct iwl_tx_ant_cfg_cmd
  * @valid: valid antenna configuration
@@ -732,7 +741,7 @@ enum iwl_time_event_type {
 
        /* P2P GO Events */
        TE_P2P_GO_ASSOC_PROT,
-       TE_P2P_GO_REPETITIVET_NOA,
+       TE_P2P_GO_REPETITIVE_NOA,
        TE_P2P_GO_CT_WINDOW,
 
        /* WiDi Sync Events */
@@ -2111,4 +2120,13 @@ struct iwl_link_qual_msrmnt_notif {
        __le32 reserved[3];
 } __packed; /* LQM_MEASUREMENT_COMPLETE_NTF_API_S_VER1 */
 
+/**
+ * struct iwl_channel_switch_noa_notif - Channel switch NOA notification
+ *
+ * @id_and_color: ID and color of the MAC
+ */
+struct iwl_channel_switch_noa_notif {
+       __le32 id_and_color;
+} __packed; /* CHANNEL_SWITCH_START_NTFY_API_S_VER_1 */
+
 #endif /* __fw_api_h__ */
index e1b6b2c..1abcabb 100644 (file)
@@ -288,7 +288,8 @@ static void iwl_mvm_dump_fifos(struct iwl_mvm *mvm,
                        fifo_hdr->fifo_num = cpu_to_le32(i);
 
                        /* Mark the number of TXF we're pulling now */
-                       iwl_trans_write_prph(mvm->trans, TXF_CPU2_NUM, i);
+                       iwl_trans_write_prph(mvm->trans, TXF_CPU2_NUM, i +
+                               ARRAY_SIZE(mvm->shared_mem_cfg.txfifo_size));
 
                        fifo_hdr->available_bytes =
                                cpu_to_le32(iwl_trans_read_prph(mvm->trans,
index 7057f35..7e0cdbf 100644 (file)
@@ -65,6 +65,7 @@
  *****************************************************************************/
 #include <net/mac80211.h>
 #include <linux/netdevice.h>
+#include <linux/acpi.h>
 
 #include "iwl-trans.h"
 #include "iwl-op-mode.h"
@@ -122,6 +123,9 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
                             IWL_RSS_HASH_TYPE_IPV6_PAYLOAD,
        };
 
+       if (mvm->trans->num_rx_queues == 1)
+               return 0;
+
        /* Do not direct RSS traffic to Q 0 which is our fallback queue */
        for (i = 0; i < ARRAY_SIZE(cmd.indirection_table); i++)
                cmd.indirection_table[i] =
@@ -131,6 +135,23 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
        return iwl_mvm_send_cmd_pdu(mvm, RSS_CONFIG_CMD, 0, sizeof(cmd), &cmd);
 }
 
+static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
+{
+       struct iwl_dqa_enable_cmd dqa_cmd = {
+               .cmd_queue = cpu_to_le32(IWL_MVM_DQA_CMD_QUEUE),
+       };
+       u32 cmd_id = iwl_cmd_id(DQA_ENABLE_CMD, DATA_PATH_GROUP, 0);
+       int ret;
+
+       ret = iwl_mvm_send_cmd_pdu(mvm, cmd_id, 0, sizeof(dqa_cmd), &dqa_cmd);
+       if (ret)
+               IWL_ERR(mvm, "Failed to send DQA enabling command: %d\n", ret);
+       else
+               IWL_DEBUG_FW(mvm, "Working in DQA mode\n");
+
+       return ret;
+}
+
 void iwl_free_fw_paging(struct iwl_mvm *mvm)
 {
        int i;
@@ -139,17 +160,21 @@ void iwl_free_fw_paging(struct iwl_mvm *mvm)
                return;
 
        for (i = 0; i < NUM_OF_FW_PAGING_BLOCKS; i++) {
-               if (!mvm->fw_paging_db[i].fw_paging_block) {
+               struct iwl_fw_paging *paging = &mvm->fw_paging_db[i];
+
+               if (!paging->fw_paging_block) {
                        IWL_DEBUG_FW(mvm,
                                     "Paging: block %d already freed, continue to next page\n",
                                     i);
 
                        continue;
                }
+               dma_unmap_page(mvm->trans->dev, paging->fw_paging_phys,
+                              paging->fw_paging_size, DMA_BIDIRECTIONAL);
 
-               __free_pages(mvm->fw_paging_db[i].fw_paging_block,
-                            get_order(mvm->fw_paging_db[i].fw_paging_size));
-               mvm->fw_paging_db[i].fw_paging_block = NULL;
+               __free_pages(paging->fw_paging_block,
+                            get_order(paging->fw_paging_size));
+               paging->fw_paging_block = NULL;
        }
        kfree(mvm->trans->paging_download_buf);
        mvm->trans->paging_download_buf = NULL;
@@ -882,6 +907,177 @@ static int iwl_mvm_config_ltr(struct iwl_mvm *mvm)
                                    sizeof(cmd), &cmd);
 }
 
+#define ACPI_WRDS_METHOD       "WRDS"
+#define ACPI_WRDS_WIFI         (0x07)
+#define ACPI_WRDS_TABLE_SIZE   10
+
+struct iwl_mvm_sar_table {
+       bool enabled;
+       u8 values[ACPI_WRDS_TABLE_SIZE];
+};
+
+#ifdef CONFIG_ACPI
+static int iwl_mvm_sar_get_wrds(struct iwl_mvm *mvm, union acpi_object *wrds,
+                               struct iwl_mvm_sar_table *sar_table)
+{
+       union acpi_object *data_pkg;
+       u32 i;
+
+       /* We need at least two packages, one for the revision and one
+        * for the data itself.  Also check that the revision is valid
+        * (i.e. it is an integer set to 0).
+        */
+       if (wrds->type != ACPI_TYPE_PACKAGE ||
+           wrds->package.count < 2 ||
+           wrds->package.elements[0].type != ACPI_TYPE_INTEGER ||
+           wrds->package.elements[0].integer.value != 0) {
+               IWL_DEBUG_RADIO(mvm, "Unsupported wrds structure\n");
+               return -EINVAL;
+       }
+
+       /* loop through all the packages to find the one for WiFi */
+       for (i = 1; i < wrds->package.count; i++) {
+               union acpi_object *domain;
+
+               data_pkg = &wrds->package.elements[i];
+
+               /* Skip anything that is not a package with the right
+                * number of elements (i.e. domain_type,
+                * enabled/disabled plus the sar table size).
+                */
+               if (data_pkg->type != ACPI_TYPE_PACKAGE ||
+                   data_pkg->package.count != ACPI_WRDS_TABLE_SIZE + 2)
+                       continue;
+
+               domain = &data_pkg->package.elements[0];
+               if (domain->type == ACPI_TYPE_INTEGER &&
+                   domain->integer.value == ACPI_WRDS_WIFI)
+                       break;
+
+               data_pkg = NULL;
+       }
+
+       if (!data_pkg)
+               return -ENOENT;
+
+       if (data_pkg->package.elements[1].type != ACPI_TYPE_INTEGER)
+               return -EINVAL;
+
+       sar_table->enabled = !!(data_pkg->package.elements[1].integer.value);
+
+       for (i = 0; i < ACPI_WRDS_TABLE_SIZE; i++) {
+               union acpi_object *entry;
+
+               entry = &data_pkg->package.elements[i + 2];
+               if ((entry->type != ACPI_TYPE_INTEGER) ||
+                   (entry->integer.value > U8_MAX))
+                       return -EINVAL;
+
+               sar_table->values[i] = entry->integer.value;
+       }
+
+       return 0;
+}
+
+static int iwl_mvm_sar_get_table(struct iwl_mvm *mvm,
+                                struct iwl_mvm_sar_table *sar_table)
+{
+       acpi_handle root_handle;
+       acpi_handle handle;
+       struct acpi_buffer wrds = {ACPI_ALLOCATE_BUFFER, NULL};
+       acpi_status status;
+       int ret;
+
+       root_handle = ACPI_HANDLE(mvm->dev);
+       if (!root_handle) {
+               IWL_DEBUG_RADIO(mvm,
+                               "Could not retrieve root port ACPI handle\n");
+               return -ENOENT;
+       }
+
+       /* Get the method's handle */
+       status = acpi_get_handle(root_handle, (acpi_string)ACPI_WRDS_METHOD,
+                                &handle);
+       if (ACPI_FAILURE(status)) {
+               IWL_DEBUG_RADIO(mvm, "WRDS method not found\n");
+               return -ENOENT;
+       }
+
+       /* Call WRDS with no arguments */
+       status = acpi_evaluate_object(handle, NULL, NULL, &wrds);
+       if (ACPI_FAILURE(status)) {
+               IWL_DEBUG_RADIO(mvm, "WRDS invocation failed (0x%x)\n", status);
+               return -ENOENT;
+       }
+
+       ret = iwl_mvm_sar_get_wrds(mvm, wrds.pointer, sar_table);
+       kfree(wrds.pointer);
+
+       return ret;
+}
+#else /* CONFIG_ACPI */
+static int iwl_mvm_sar_get_table(struct iwl_mvm *mvm,
+                                struct iwl_mvm_sar_table *sar_table)
+{
+       return -ENOENT;
+}
+#endif /* CONFIG_ACPI */
+
+static int iwl_mvm_sar_init(struct iwl_mvm *mvm)
+{
+       struct iwl_mvm_sar_table sar_table;
+       struct iwl_dev_tx_power_cmd cmd = {
+               .v3.v2.set_mode = cpu_to_le32(IWL_TX_POWER_MODE_SET_CHAINS),
+       };
+       int ret, i, j, idx;
+       int len = sizeof(cmd);
+
+       /* we can't do anything with the table if the FW doesn't support it */
+       if (!fw_has_api(&mvm->fw->ucode_capa,
+                       IWL_UCODE_TLV_API_TX_POWER_CHAIN)) {
+               IWL_DEBUG_RADIO(mvm,
+                               "FW doesn't support per-chain TX power settings.\n");
+               return 0;
+       }
+
+       if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_TX_POWER_ACK))
+               len = sizeof(cmd.v3);
+
+       ret = iwl_mvm_sar_get_table(mvm, &sar_table);
+       if (ret < 0) {
+               IWL_DEBUG_RADIO(mvm,
+                               "SAR BIOS table invalid or unavailable. (%d)\n",
+                               ret);
+               /* we don't fail if the table is not available */
+               return 0;
+       }
+
+       if (!sar_table.enabled)
+               return 0;
+
+       IWL_DEBUG_RADIO(mvm, "Sending REDUCE_TX_POWER_CMD per chain\n");
+
+       BUILD_BUG_ON(IWL_NUM_CHAIN_LIMITS * IWL_NUM_SUB_BANDS !=
+                    ACPI_WRDS_TABLE_SIZE);
+
+       for (i = 0; i < IWL_NUM_CHAIN_LIMITS; i++) {
+               IWL_DEBUG_RADIO(mvm, "  Chain[%d]:\n", i);
+               for (j = 0; j < IWL_NUM_SUB_BANDS; j++) {
+                       idx = (i * IWL_NUM_SUB_BANDS) + j;
+                       cmd.v3.per_chain_restriction[i][j] =
+                               cpu_to_le16(sar_table.values[idx]);
+                       IWL_DEBUG_RADIO(mvm, "    Band[%d] = %d * .125dBm\n",
+                                       j, sar_table.values[idx]);
+               }
+       }
+
+       ret = iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, len, &cmd);
+       if (ret)
+               IWL_ERR(mvm, "failed to set per-chain TX power: %d\n", ret);
+
+       return ret;
+}
+
 int iwl_mvm_up(struct iwl_mvm *mvm)
 {
        int ret, i;
@@ -976,6 +1172,15 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
        /* reset quota debouncing buffer - 0xff will yield invalid data */
        memset(&mvm->last_quota_cmd, 0xff, sizeof(mvm->last_quota_cmd));
 
+       /* Enable DQA-mode if required */
+       if (iwl_mvm_is_dqa_supported(mvm)) {
+               ret = iwl_mvm_send_dqa_cmd(mvm);
+               if (ret)
+                       goto error;
+       } else {
+               IWL_DEBUG_FW(mvm, "Working in non-DQA mode\n");
+       }
+
        /* Add auxiliary station for scanning */
        ret = iwl_mvm_add_aux_sta(mvm);
        if (ret)
@@ -1048,6 +1253,10 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
        if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
                iwl_mvm_unref(mvm, IWL_MVM_REF_UCODE_DOWN);
 
+       ret = iwl_mvm_sar_init(mvm);
+       if (ret)
+               goto error;
+
        IWL_DEBUG_INFO(mvm, "RT uCode started.\n");
        return 0;
  error:
index 7aae068..69c42ce 100644 (file)
@@ -1006,7 +1006,7 @@ static int iwl_mvm_mac_ctxt_cmd_p2p_device(struct iwl_mvm *mvm,
 }
 
 static void iwl_mvm_mac_ctxt_set_tim(struct iwl_mvm *mvm,
-                                    struct iwl_mac_beacon_cmd *beacon_cmd,
+                                    struct iwl_mac_beacon_cmd_v6 *beacon_cmd,
                                     u8 *beacon, u32 frame_size)
 {
        u32 tim_idx;
@@ -1030,6 +1030,23 @@ static void iwl_mvm_mac_ctxt_set_tim(struct iwl_mvm *mvm,
        }
 }
 
+static u32 iwl_mvm_find_ie_offset(u8 *beacon, u8 eid, u32 frame_size)
+{
+       struct ieee80211_mgmt *mgmt = (void *)beacon;
+       const u8 *ie;
+
+       if (WARN_ON_ONCE(frame_size <= (mgmt->u.beacon.variable - beacon)))
+               return 0;
+
+       frame_size -= mgmt->u.beacon.variable - beacon;
+
+       ie = cfg80211_find_ie(eid, mgmt->u.beacon.variable, frame_size);
+       if (!ie)
+               return 0;
+
+       return ie - beacon;
+}
+
 static int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
                                        struct ieee80211_vif *vif,
                                        struct sk_buff *beacon)
@@ -1039,7 +1056,10 @@ static int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
                .id = BEACON_TEMPLATE_CMD,
                .flags = CMD_ASYNC,
        };
-       struct iwl_mac_beacon_cmd beacon_cmd = {};
+       union {
+               struct iwl_mac_beacon_cmd_v6 beacon_cmd_v6;
+               struct iwl_mac_beacon_cmd beacon_cmd;
+       } u = {};
        struct ieee80211_tx_info *info;
        u32 beacon_skb_len;
        u32 rate, tx_flags;
@@ -1051,18 +1071,18 @@ static int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
 
        /* TODO: for now the beacon template id is set to be the mac context id.
         * Might be better to handle it as another resource ... */
-       beacon_cmd.template_id = cpu_to_le32((u32)mvmvif->id);
+       u.beacon_cmd_v6.template_id = cpu_to_le32((u32)mvmvif->id);
        info = IEEE80211_SKB_CB(beacon);
 
        /* Set up TX command fields */
-       beacon_cmd.tx.len = cpu_to_le16((u16)beacon_skb_len);
-       beacon_cmd.tx.sta_id = mvmvif->bcast_sta.sta_id;
-       beacon_cmd.tx.life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE);
+       u.beacon_cmd_v6.tx.len = cpu_to_le16((u16)beacon_skb_len);
+       u.beacon_cmd_v6.tx.sta_id = mvmvif->bcast_sta.sta_id;
+       u.beacon_cmd_v6.tx.life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE);
        tx_flags = TX_CMD_FLG_SEQ_CTL | TX_CMD_FLG_TSF;
        tx_flags |=
                iwl_mvm_bt_coex_tx_prio(mvm, (void *)beacon->data, info, 0) <<
                                                TX_CMD_FLG_BT_PRIO_POS;
-       beacon_cmd.tx.tx_flags = cpu_to_le32(tx_flags);
+       u.beacon_cmd_v6.tx.tx_flags = cpu_to_le32(tx_flags);
 
        if (!fw_has_capa(&mvm->fw->ucode_capa,
                         IWL_UCODE_TLV_CAPA_BEACON_ANT_SELECTION)) {
@@ -1071,7 +1091,7 @@ static int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
                                             mvm->mgmt_last_antenna_idx);
        }
 
-       beacon_cmd.tx.rate_n_flags =
+       u.beacon_cmd_v6.tx.rate_n_flags =
                cpu_to_le32(BIT(mvm->mgmt_last_antenna_idx) <<
                            RATE_MCS_ANT_POS);
 
@@ -1079,20 +1099,37 @@ static int iwl_mvm_mac_ctxt_send_beacon(struct iwl_mvm *mvm,
                rate = IWL_FIRST_OFDM_RATE;
        } else {
                rate = IWL_FIRST_CCK_RATE;
-               beacon_cmd.tx.rate_n_flags |= cpu_to_le32(RATE_MCS_CCK_MSK);
+               u.beacon_cmd_v6.tx.rate_n_flags |=
+                                       cpu_to_le32(RATE_MCS_CCK_MSK);
        }
-       beacon_cmd.tx.rate_n_flags |=
+       u.beacon_cmd_v6.tx.rate_n_flags |=
                cpu_to_le32(iwl_mvm_mac80211_idx_to_hwrate(rate));
 
        /* Set up TX beacon command fields */
        if (vif->type == NL80211_IFTYPE_AP)
-               iwl_mvm_mac_ctxt_set_tim(mvm, &beacon_cmd,
+               iwl_mvm_mac_ctxt_set_tim(mvm, &u.beacon_cmd_v6,
                                         beacon->data,
                                         beacon_skb_len);
 
        /* Submit command */
-       cmd.len[0] = sizeof(beacon_cmd);
-       cmd.data[0] = &beacon_cmd;
+
+       if (fw_has_capa(&mvm->fw->ucode_capa,
+                       IWL_UCODE_TLV_CAPA_CSA_AND_TBTT_OFFLOAD)) {
+               u.beacon_cmd.csa_offset =
+                       cpu_to_le32(iwl_mvm_find_ie_offset(beacon->data,
+                                                   WLAN_EID_CHANNEL_SWITCH,
+                                                   beacon_skb_len));
+               u.beacon_cmd.ecsa_offset =
+                       cpu_to_le32(iwl_mvm_find_ie_offset(beacon->data,
+                                                   WLAN_EID_EXT_CHANSWITCH_ANN,
+                                                   beacon_skb_len));
+
+               cmd.len[0] = sizeof(u.beacon_cmd);
+       } else {
+               cmd.len[0] = sizeof(u.beacon_cmd_v6);
+       }
+
+       cmd.data[0] = &u;
        cmd.dataflags[0] = 0;
        cmd.len[1] = beacon_skb_len;
        cmd.data[1] = beacon->data;
@@ -1538,3 +1575,48 @@ void iwl_mvm_rx_stored_beacon_notif(struct iwl_mvm *mvm,
        /* pass it as regular rx to mac80211 */
        ieee80211_rx_napi(mvm->hw, NULL, skb, NULL);
 }
+
+void iwl_mvm_channel_switch_noa_notif(struct iwl_mvm *mvm,
+                                     struct iwl_rx_cmd_buffer *rxb)
+{
+       struct iwl_rx_packet *pkt = rxb_addr(rxb);
+       struct iwl_channel_switch_noa_notif *notif = (void *)pkt->data;
+       struct ieee80211_vif *csa_vif;
+       struct iwl_mvm_vif *mvmvif;
+       int len = iwl_rx_packet_payload_len(pkt);
+       u32 id_n_color;
+
+       if (WARN_ON_ONCE(len < sizeof(*notif)))
+               return;
+
+       rcu_read_lock();
+
+       csa_vif = rcu_dereference(mvm->csa_vif);
+       if (WARN_ON(!csa_vif || !csa_vif->csa_active))
+               goto out_unlock;
+
+       id_n_color = le32_to_cpu(notif->id_and_color);
+
+       mvmvif = iwl_mvm_vif_from_mac80211(csa_vif);
+       if (WARN(FW_CMD_ID_AND_COLOR(mvmvif->id, mvmvif->color) != id_n_color,
+                "channel switch noa notification on unexpected vif (csa_vif=%d, notif=%d)",
+                FW_CMD_ID_AND_COLOR(mvmvif->id, mvmvif->color), id_n_color))
+               goto out_unlock;
+
+       IWL_DEBUG_INFO(mvm, "Channel Switch Started Notification\n");
+
+       queue_delayed_work(system_wq, &mvm->cs_tx_unblock_dwork,
+                          msecs_to_jiffies(IWL_MVM_CS_UNBLOCK_TX_TIMEOUT *
+                                           csa_vif->bss_conf.beacon_int));
+
+       ieee80211_csa_finish(csa_vif);
+
+       rcu_read_unlock();
+
+       RCU_INIT_POINTER(mvm->csa_vif, NULL);
+
+       return;
+
+out_unlock:
+       rcu_read_unlock();
+}
index 18a8474..6d60645 100644 (file)
@@ -465,11 +465,20 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
        hw->uapsd_queues = IWL_MVM_UAPSD_QUEUES;
        hw->uapsd_max_sp_len = IWL_UAPSD_MAX_SP;
 
-       BUILD_BUG_ON(ARRAY_SIZE(mvm->ciphers) < ARRAY_SIZE(mvm_ciphers) + 2);
+       BUILD_BUG_ON(ARRAY_SIZE(mvm->ciphers) < ARRAY_SIZE(mvm_ciphers) + 4);
        memcpy(mvm->ciphers, mvm_ciphers, sizeof(mvm_ciphers));
        hw->wiphy->n_cipher_suites = ARRAY_SIZE(mvm_ciphers);
        hw->wiphy->cipher_suites = mvm->ciphers;
 
+       if (iwl_mvm_has_new_rx_api(mvm)) {
+               mvm->ciphers[hw->wiphy->n_cipher_suites] =
+                       WLAN_CIPHER_SUITE_GCMP;
+               hw->wiphy->n_cipher_suites++;
+               mvm->ciphers[hw->wiphy->n_cipher_suites] =
+                       WLAN_CIPHER_SUITE_GCMP_256;
+               hw->wiphy->n_cipher_suites++;
+       }
+
        /*
         * Enable 11w if advertised by firmware and software crypto
         * is not enabled (as the firmware will interpret some mgmt
@@ -485,10 +494,23 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
 
        /* currently FW API supports only one optional cipher scheme */
        if (mvm->fw->cs[0].cipher) {
+               const struct iwl_fw_cipher_scheme *fwcs = &mvm->fw->cs[0];
+               struct ieee80211_cipher_scheme *cs = &mvm->cs[0];
+
                mvm->hw->n_cipher_schemes = 1;
-               mvm->hw->cipher_schemes = &mvm->fw->cs[0];
-               mvm->ciphers[hw->wiphy->n_cipher_suites] =
-                       mvm->fw->cs[0].cipher;
+
+               cs->cipher = le32_to_cpu(fwcs->cipher);
+               cs->iftype = BIT(NL80211_IFTYPE_STATION);
+               cs->hdr_len = fwcs->hdr_len;
+               cs->pn_len = fwcs->pn_len;
+               cs->pn_off = fwcs->pn_off;
+               cs->key_idx_off = fwcs->key_idx_off;
+               cs->key_idx_mask = fwcs->key_idx_mask;
+               cs->key_idx_shift = fwcs->key_idx_shift;
+               cs->mic_len = fwcs->mic_len;
+
+               mvm->hw->cipher_schemes = mvm->cs;
+               mvm->ciphers[hw->wiphy->n_cipher_suites] = cs->cipher;
                hw->wiphy->n_cipher_suites++;
        }
 
@@ -1011,11 +1033,7 @@ static void iwl_mvm_restart_cleanup(struct iwl_mvm *mvm)
        memset(mvm->sta_deferred_frames, 0, sizeof(mvm->sta_deferred_frames));
        memset(mvm->tfd_drained, 0, sizeof(mvm->tfd_drained));
        memset(&mvm->last_bt_notif, 0, sizeof(mvm->last_bt_notif));
-       memset(&mvm->last_bt_notif_old, 0, sizeof(mvm->last_bt_notif_old));
        memset(&mvm->last_bt_ci_cmd, 0, sizeof(mvm->last_bt_ci_cmd));
-       memset(&mvm->last_bt_ci_cmd_old, 0, sizeof(mvm->last_bt_ci_cmd_old));
-       memset(&mvm->bt_ack_kill_msk, 0, sizeof(mvm->bt_ack_kill_msk));
-       memset(&mvm->bt_cts_kill_msk, 0, sizeof(mvm->bt_cts_kill_msk));
 
        ieee80211_wake_queues(mvm->hw);
 
@@ -1199,6 +1217,8 @@ static void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
        flush_work(&mvm->async_handlers_wk);
        flush_work(&mvm->add_stream_wk);
        cancel_delayed_work_sync(&mvm->fw_dump_wk);
+       cancel_delayed_work_sync(&mvm->cs_tx_unblock_dwork);
+       cancel_delayed_work_sync(&mvm->scan_timeout_dwork);
        iwl_mvm_free_fw_dump_desc(mvm);
 
        mutex_lock(&mvm->mutex);
@@ -1230,18 +1250,20 @@ static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
                                s16 tx_power)
 {
        struct iwl_dev_tx_power_cmd cmd = {
-               .v2.set_mode = cpu_to_le32(IWL_TX_POWER_MODE_SET_MAC),
-               .v2.mac_context_id =
+               .v3.v2.set_mode = cpu_to_le32(IWL_TX_POWER_MODE_SET_MAC),
+               .v3.v2.mac_context_id =
                        cpu_to_le32(iwl_mvm_vif_from_mac80211(vif)->id),
-               .v2.pwr_restriction = cpu_to_le16(8 * tx_power),
+               .v3.v2.pwr_restriction = cpu_to_le16(8 * tx_power),
        };
        int len = sizeof(cmd);
 
        if (tx_power == IWL_DEFAULT_MAX_TX_POWER)
-               cmd.v2.pwr_restriction = cpu_to_le16(IWL_DEV_MAX_TX_POWER);
+               cmd.v3.v2.pwr_restriction = cpu_to_le16(IWL_DEV_MAX_TX_POWER);
 
+       if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_TX_POWER_ACK))
+               len = sizeof(cmd.v3);
        if (!fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_TX_POWER_CHAIN))
-               len = sizeof(cmd.v2);
+               len = sizeof(cmd.v3.v2);
 
        return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, len, &cmd);
 }
@@ -2360,7 +2382,7 @@ static void iwl_mvm_check_uapsd(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
        if (!(mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_UAPSD_SUPPORT))
                return;
 
-       if (vif->p2p && !iwl_mvm_is_p2p_standalone_uapsd_supported(mvm)) {
+       if (vif->p2p && !iwl_mvm_is_p2p_scm_uapsd_supported(mvm)) {
                vif->driver_flags &= ~IEEE80211_VIF_SUPPORTS_UAPSD;
                return;
        }
@@ -2719,6 +2741,8 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
                key->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE;
                break;
        case WLAN_CIPHER_SUITE_CCMP:
+       case WLAN_CIPHER_SUITE_GCMP:
+       case WLAN_CIPHER_SUITE_GCMP_256:
                key->flags |= IEEE80211_KEY_FLAG_PUT_IV_SPACE;
                break;
        case WLAN_CIPHER_SUITE_AES_CMAC:
@@ -2780,7 +2804,8 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
                    sta && iwl_mvm_has_new_rx_api(mvm) &&
                    key->flags & IEEE80211_KEY_FLAG_PAIRWISE &&
                    (key->cipher == WLAN_CIPHER_SUITE_CCMP ||
-                    key->cipher == WLAN_CIPHER_SUITE_GCMP)) {
+                    key->cipher == WLAN_CIPHER_SUITE_GCMP ||
+                    key->cipher == WLAN_CIPHER_SUITE_GCMP_256)) {
                        struct ieee80211_key_seq seq;
                        int tid, q;
 
@@ -2834,7 +2859,8 @@ static int iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
                if (sta && iwl_mvm_has_new_rx_api(mvm) &&
                    key->flags & IEEE80211_KEY_FLAG_PAIRWISE &&
                    (key->cipher == WLAN_CIPHER_SUITE_CCMP ||
-                    key->cipher == WLAN_CIPHER_SUITE_GCMP)) {
+                    key->cipher == WLAN_CIPHER_SUITE_GCMP ||
+                    key->cipher == WLAN_CIPHER_SUITE_GCMP_256)) {
                        mvmsta = iwl_mvm_sta_from_mac80211(sta);
                        ptk_pn = rcu_dereference_protected(
                                                mvmsta->ptk_pn[keyidx],
@@ -3687,6 +3713,13 @@ static int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
                        goto out_unlock;
                }
 
+               /* we still didn't unblock tx. prevent new CS meanwhile */
+               if (rcu_dereference_protected(mvm->csa_tx_blocked_vif,
+                                             lockdep_is_held(&mvm->mutex))) {
+                       ret = -EBUSY;
+                       goto out_unlock;
+               }
+
                rcu_assign_pointer(mvm->csa_vif, vif);
 
                if (WARN_ONCE(mvmvif->csa_countdown,
@@ -3695,6 +3728,8 @@ static int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
                        goto out_unlock;
                }
 
+               mvmvif->csa_target_freq = chsw->chandef.chan->center_freq;
+
                break;
        case NL80211_IFTYPE_STATION:
                if (mvmvif->lqm_active)
@@ -3898,6 +3933,11 @@ static void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
        struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
        struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
 
+       if (mvmsta->avg_energy) {
+               sinfo->signal_avg = mvmsta->avg_energy;
+               sinfo->filled |= BIT(NL80211_STA_INFO_SIGNAL_AVG);
+       }
+
        if (!fw_has_capa(&mvm->fw->ucode_capa,
                         IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))
                return;
index ffbd41d..b4fc86d 100644
@@ -452,6 +452,7 @@ struct iwl_mvm_vif {
        /* Indicates that CSA countdown may be started */
        bool csa_countdown;
        bool csa_failed;
+       u16 csa_target_freq;
 
        /* TCP Checksum Offload */
        netdev_features_t features;
@@ -686,13 +687,28 @@ struct iwl_mvm_baid_data {
  *     This is the state of a queue that has been fully configured (including
  *     SCD pointers, etc), has a specific RA/TID assigned to it, and can be
  *     used to send traffic.
+ * @IWL_MVM_QUEUE_SHARED: queue is shared, or in a process of becoming shared
+ *     This is a state in which a single queue serves more than one TID, all of
+ *     which are not aggregated. Note that the queue is only associated to one
+ *     RA.
+ * @IWL_MVM_QUEUE_INACTIVE: queue is allocated but no traffic on it
+ *     This is a state of a queue that has had traffic on it, but during the
+ *     last %IWL_MVM_DQA_QUEUE_TIMEOUT time period there has been no traffic on
+ *     it. In this state, when a new queue is needed to be allocated but no
+ *     such free queue exists, an inactive queue might be freed and given to
+ *     the new RA/TID.
  */
 enum iwl_mvm_queue_status {
        IWL_MVM_QUEUE_FREE,
        IWL_MVM_QUEUE_RESERVED,
        IWL_MVM_QUEUE_READY,
+       IWL_MVM_QUEUE_SHARED,
+       IWL_MVM_QUEUE_INACTIVE,
 };
 
+#define IWL_MVM_DQA_QUEUE_TIMEOUT      (5 * HZ)
+#define IWL_MVM_NUM_CIPHERS             8
+
 struct iwl_mvm {
        /* for logger access */
        struct device *dev;
@@ -731,6 +747,7 @@ struct iwl_mvm {
        struct iwl_sf_region sf_space;
 
        u32 ampdu_ref;
+       bool ampdu_toggle;
 
        struct iwl_notif_wait_data notif_wait;
 
@@ -748,11 +765,16 @@ struct iwl_mvm {
                u32 hw_queue_to_mac80211;
                u8 hw_queue_refcount;
                u8 ra_sta_id; /* The RA this queue is mapped to, if exists */
+               bool reserved; /* Is this the TXQ reserved for a STA */
+               u8 mac80211_ac; /* The mac80211 AC this queue is mapped to */
                u16 tid_bitmap; /* Bitmap of the TIDs mapped to this queue */
+               /* Timestamp for inactivation per TID of this queue */
+               unsigned long last_frame_time[IWL_MAX_TID_COUNT + 1];
                enum iwl_mvm_queue_status status;
        } queue_info[IWL_MAX_HW_QUEUES];
        spinlock_t queue_info_lock; /* For syncing queue mgmt operations */
        struct work_struct add_stream_wk; /* To add streams to queues */
+
        atomic_t mac80211_queue_stop_count[IEEE80211_MAX_QUEUES];
 
        const char *nvm_file_name;
@@ -787,7 +809,7 @@ struct iwl_mvm {
        struct iwl_mcast_filter_cmd *mcast_filter_cmd;
        enum iwl_mvm_scan_type scan_type;
        enum iwl_mvm_sched_scan_pass_all_states sched_scan_pass_all;
-       struct timer_list scan_timer;
+       struct delayed_work scan_timeout_dwork;
 
        /* max number of simultaneous scans the FW supports */
        unsigned int max_scans;
@@ -910,11 +932,6 @@ struct iwl_mvm {
        wait_queue_head_t d0i3_exit_waitq;
 
        /* BT-Coex */
-       u8 bt_ack_kill_msk[NUM_PHY_CTX];
-       u8 bt_cts_kill_msk[NUM_PHY_CTX];
-
-       struct iwl_bt_coex_profile_notif_old last_bt_notif_old;
-       struct iwl_bt_coex_ci_cmd_old last_bt_ci_cmd_old;
        struct iwl_bt_coex_profile_notif last_bt_notif;
        struct iwl_bt_coex_ci_cmd last_bt_ci_cmd;
 
@@ -994,7 +1011,8 @@ struct iwl_mvm {
 
        struct iwl_mvm_shared_mem_cfg shared_mem_cfg;
 
-       u32 ciphers[6];
+       u32 ciphers[IWL_MVM_NUM_CIPHERS];
+       struct ieee80211_cipher_scheme cs[IWL_UCODE_MAX_CS];
        struct iwl_mvm_tof_data tof_data;
 
        struct ieee80211_vif *nan_vif;
@@ -1006,6 +1024,8 @@ struct iwl_mvm {
         * clients.
         */
        bool drop_bcn_ap_mode;
+
+       struct delayed_work cs_tx_unblock_dwork;
 };
 
 /* Extract MVM priv from op_mode and _hw */
@@ -1158,10 +1178,10 @@ static inline bool iwl_mvm_is_mplut_supported(struct iwl_mvm *mvm)
 }
 
 static inline
-bool iwl_mvm_is_p2p_standalone_uapsd_supported(struct iwl_mvm *mvm)
+bool iwl_mvm_is_p2p_scm_uapsd_supported(struct iwl_mvm *mvm)
 {
        return fw_has_capa(&mvm->fw->ucode_capa,
-                          IWL_UCODE_TLV_CAPA_P2P_STANDALONE_UAPSD) &&
+                          IWL_UCODE_TLV_CAPA_P2P_SCM_UAPSD) &&
                !(iwlwifi_mod_params.uapsd_disable &
                  IWL_DISABLE_UAPSD_P2P_CLIENT);
 }
@@ -1321,7 +1341,6 @@ bool iwl_mvm_bcast_filter_build_cmd(struct iwl_mvm *mvm,
 void iwl_mvm_rx_rx_phy_cmd(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb);
 void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
                        struct iwl_rx_cmd_buffer *rxb);
-void iwl_mvm_rx_phy_cmd_mq(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb);
 void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                        struct iwl_rx_cmd_buffer *rxb, int queue);
 void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
@@ -1381,6 +1400,8 @@ void iwl_mvm_mac_ctxt_recalc_tsf_id(struct iwl_mvm *mvm,
                                    struct ieee80211_vif *vif);
 unsigned long iwl_mvm_get_used_hw_queues(struct iwl_mvm *mvm,
                                         struct ieee80211_vif *exclude_vif);
+void iwl_mvm_channel_switch_noa_notif(struct iwl_mvm *mvm,
+                                     struct iwl_rx_cmd_buffer *rxb);
 /* Bindings */
 int iwl_mvm_binding_add_vif(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
 int iwl_mvm_binding_remove_vif(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
@@ -1397,7 +1418,7 @@ int iwl_mvm_scan_size(struct iwl_mvm *mvm);
 int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify);
 int iwl_mvm_max_scan_ie_len(struct iwl_mvm *mvm);
 void iwl_mvm_report_scan_aborted(struct iwl_mvm *mvm);
-void iwl_mvm_scan_timeout(unsigned long data);
+void iwl_mvm_scan_timeout_wk(struct work_struct *work);
 
 /* Scheduled scan */
 void iwl_mvm_rx_lmac_scan_complete_notif(struct iwl_mvm *mvm,
@@ -1613,7 +1634,7 @@ void iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue,
  */
 void iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue,
                         u8 tid, u8 flags);
-int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 minq, u8 maxq);
+int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, u8 minq, u8 maxq);
 
 /* Return a bitmask with all the hw supported queues, except for the
  * command queue, which can't be flushed.
@@ -1720,6 +1741,8 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
 void iwl_mvm_reorder_timer_expired(unsigned long data);
 struct ieee80211_vif *iwl_mvm_get_bss_vif(struct iwl_mvm *mvm);
 
+void iwl_mvm_inactivity_check(struct iwl_mvm *mvm);
+
 void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error);
 unsigned int iwl_mvm_get_wd_timeout(struct iwl_mvm *mvm,
                                    struct ieee80211_vif *vif,
index 25a9840..7a686f6 100644
@@ -66,7 +66,6 @@
  *****************************************************************************/
 #include <linux/firmware.h>
 #include <linux/rtnetlink.h>
-#include <linux/pci.h>
 #include <linux/acpi.h>
 #include "iwl-trans.h"
 #include "iwl-csr.h"
@@ -667,8 +666,7 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
                .mcc = cpu_to_le16(alpha2[0] << 8 | alpha2[1]),
                .source_id = (u8)src_id,
        };
-       struct iwl_mcc_update_resp *mcc_resp, *resp_cp = NULL;
-       struct iwl_mcc_update_resp_v1 *mcc_resp_v1 = NULL;
+       struct iwl_mcc_update_resp *resp_cp;
        struct iwl_rx_packet *pkt;
        struct iwl_host_cmd cmd = {
                .id = MCC_UPDATE_CMD,
@@ -701,34 +699,36 @@ iwl_mvm_update_mcc(struct iwl_mvm *mvm, const char *alpha2,
 
        /* Extract MCC response */
        if (resp_v2) {
-               mcc_resp = (void *)pkt->data;
+               struct iwl_mcc_update_resp *mcc_resp = (void *)pkt->data;
+
                n_channels =  __le32_to_cpu(mcc_resp->n_channels);
+               resp_len = sizeof(struct iwl_mcc_update_resp) +
+                          n_channels * sizeof(__le32);
+               resp_cp = kmemdup(mcc_resp, resp_len, GFP_KERNEL);
        } else {
-               mcc_resp_v1 = (void *)pkt->data;
+               struct iwl_mcc_update_resp_v1 *mcc_resp_v1 = (void *)pkt->data;
+
                n_channels =  __le32_to_cpu(mcc_resp_v1->n_channels);
+               resp_len = sizeof(struct iwl_mcc_update_resp) +
+                          n_channels * sizeof(__le32);
+               resp_cp = kzalloc(resp_len, GFP_KERNEL);
+
+               if (resp_cp) {
+                       resp_cp->status = mcc_resp_v1->status;
+                       resp_cp->mcc = mcc_resp_v1->mcc;
+                       resp_cp->cap = mcc_resp_v1->cap;
+                       resp_cp->source_id = mcc_resp_v1->source_id;
+                       resp_cp->n_channels = mcc_resp_v1->n_channels;
+                       memcpy(resp_cp->channels, mcc_resp_v1->channels,
+                              n_channels * sizeof(__le32));
+               }
        }
 
-       resp_len = sizeof(struct iwl_mcc_update_resp) + n_channels *
-               sizeof(__le32);
-
-       resp_cp = kzalloc(resp_len, GFP_KERNEL);
        if (!resp_cp) {
                ret = -ENOMEM;
                goto exit;
        }
 
-       if (resp_v2) {
-               memcpy(resp_cp, mcc_resp, resp_len);
-       } else {
-               resp_cp->status = mcc_resp_v1->status;
-               resp_cp->mcc = mcc_resp_v1->mcc;
-               resp_cp->cap = mcc_resp_v1->cap;
-               resp_cp->source_id = mcc_resp_v1->source_id;
-               resp_cp->n_channels = mcc_resp_v1->n_channels;
-               memcpy(resp_cp->channels, mcc_resp_v1->channels,
-                      n_channels * sizeof(__le32));
-       }
-
        status = le32_to_cpu(resp_cp->status);
 
        mcc = le16_to_cpu(resp_cp->mcc);
@@ -802,9 +802,8 @@ static int iwl_mvm_get_bios_mcc(struct iwl_mvm *mvm, char *mcc)
        struct acpi_buffer wrdd = {ACPI_ALLOCATE_BUFFER, NULL};
        acpi_status status;
        u32 mcc_val;
-       struct pci_dev *pdev = to_pci_dev(mvm->dev);
 
-       root_handle = ACPI_HANDLE(&pdev->dev);
+       root_handle = ACPI_HANDLE(mvm->dev);
        if (!root_handle) {
                IWL_DEBUG_LAR(mvm,
                              "Could not retrieve root port ACPI handle\n");
index a68054f..55d9096 100644
@@ -431,6 +431,7 @@ static const struct iwl_hcmd_names iwl_mvm_system_names[] = {
 static const struct iwl_hcmd_names iwl_mvm_mac_conf_names[] = {
        HCMD_NAME(LINK_QUALITY_MEASUREMENT_CMD),
        HCMD_NAME(LINK_QUALITY_MEASUREMENT_COMPLETE_NOTIF),
+       HCMD_NAME(CHANNEL_SWITCH_NOA_NOTIF),
 };
 
 /* Please keep this array *SORTED* by hex value.
@@ -494,6 +495,29 @@ static u32 calc_min_backoff(struct iwl_trans *trans, const struct iwl_cfg *cfg)
 
 static void iwl_mvm_fw_error_dump_wk(struct work_struct *work);
 
+static void iwl_mvm_tx_unblock_dwork(struct work_struct *work)
+{
+       struct iwl_mvm *mvm =
+               container_of(work, struct iwl_mvm, cs_tx_unblock_dwork.work);
+       struct ieee80211_vif *tx_blocked_vif;
+       struct iwl_mvm_vif *mvmvif;
+
+       mutex_lock(&mvm->mutex);
+
+       tx_blocked_vif =
+               rcu_dereference_protected(mvm->csa_tx_blocked_vif,
+                                         lockdep_is_held(&mvm->mutex));
+
+       if (!tx_blocked_vif)
+               goto unlock;
+
+       mvmvif = iwl_mvm_vif_from_mac80211(tx_blocked_vif);
+       iwl_mvm_modify_all_sta_disable_tx(mvm, mvmvif, false);
+       RCU_INIT_POINTER(mvm->csa_tx_blocked_vif, NULL);
+unlock:
+       mutex_unlock(&mvm->mutex);
+}
+
 static struct iwl_op_mode *
 iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
                      const struct iwl_fw *fw, struct dentry *dbgfs_dir)
@@ -553,18 +577,21 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 
        mvm->restart_fw = iwlwifi_mod_params.restart_fw ? -1 : 0;
 
-       mvm->aux_queue = 15;
        if (!iwl_mvm_is_dqa_supported(mvm)) {
-               mvm->first_agg_queue = 16;
                mvm->last_agg_queue = mvm->cfg->base_params->num_of_queues - 1;
+
+               if (mvm->cfg->base_params->num_of_queues == 16) {
+                       mvm->aux_queue = 11;
+                       mvm->first_agg_queue = 12;
+               } else {
+                       mvm->aux_queue = 15;
+                       mvm->first_agg_queue = 16;
+               }
        } else {
+               mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE;
                mvm->first_agg_queue = IWL_MVM_DQA_MIN_DATA_QUEUE;
                mvm->last_agg_queue = IWL_MVM_DQA_MAX_DATA_QUEUE;
        }
-       if (mvm->cfg->base_params->num_of_queues == 16) {
-               mvm->aux_queue = 11;
-               mvm->first_agg_queue = 12;
-       }
        mvm->sf_state = SF_UNINIT;
        mvm->cur_ucode = IWL_UCODE_INIT;
        mvm->drop_bcn_ap_mode = true;
@@ -584,6 +611,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
        INIT_WORK(&mvm->d0i3_exit_work, iwl_mvm_d0i3_exit_work);
        INIT_DELAYED_WORK(&mvm->fw_dump_wk, iwl_mvm_fw_error_dump_wk);
        INIT_DELAYED_WORK(&mvm->tdls_cs.dwork, iwl_mvm_tdls_ch_switch_work);
+       INIT_DELAYED_WORK(&mvm->scan_timeout_dwork, iwl_mvm_scan_timeout_wk);
        INIT_WORK(&mvm->add_stream_wk, iwl_mvm_add_new_dqa_stream_wk);
 
        spin_lock_init(&mvm->d0i3_tx_lock);
@@ -595,6 +623,8 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 
        SET_IEEE80211_DEV(mvm->hw, mvm->trans->dev);
 
+       INIT_DELAYED_WORK(&mvm->cs_tx_unblock_dwork, iwl_mvm_tx_unblock_dwork);
+
        /*
         * Populate the state variables that the transport layer needs
         * to know about.
@@ -603,6 +633,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
        trans_cfg.no_reclaim_cmds = no_reclaim_cmds;
        trans_cfg.n_no_reclaim_cmds = ARRAY_SIZE(no_reclaim_cmds);
        switch (iwlwifi_mod_params.amsdu_size) {
+       case IWL_AMSDU_DEF:
        case IWL_AMSDU_4K:
                trans_cfg.rx_buf_size = IWL_AMSDU_4K;
                break;
@@ -617,6 +648,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
                       iwlwifi_mod_params.amsdu_size);
                trans_cfg.rx_buf_size = IWL_AMSDU_4K;
        }
+
+       /* the hardware splits the A-MSDU */
+       if (mvm->cfg->mq_rx_supported)
+               trans_cfg.rx_buf_size = IWL_AMSDU_4K;
        trans_cfg.wide_cmd_header = fw_has_api(&mvm->fw->ucode_capa,
                                               IWL_UCODE_TLV_API_WIDE_CMD_HDR);
 
@@ -633,6 +668,9 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
        trans_cfg.cmd_fifo = IWL_MVM_TX_FIFO_CMD;
        trans_cfg.scd_set_active = true;
 
+       trans_cfg.cb_data_offs = offsetof(struct ieee80211_tx_info,
+                                         driver_data[2]);
+
        trans_cfg.sdio_adma_addr = fw->sdio_adma_addr;
        trans_cfg.sw_csum_tx = IWL_MVM_SW_TX_CSUM_OFFLOAD;
 
@@ -735,9 +773,6 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 
        iwl_mvm_tof_init(mvm);
 
-       setup_timer(&mvm->scan_timer, iwl_mvm_scan_timeout,
-                   (unsigned long)mvm);
-
        return op_mode;
 
  out_unregister:
@@ -791,8 +826,6 @@ static void iwl_op_mode_mvm_stop(struct iwl_op_mode *op_mode)
 
        iwl_mvm_tof_clean(mvm);
 
-       del_timer_sync(&mvm->scan_timer);
-
        mutex_destroy(&mvm->mutex);
        mutex_destroy(&mvm->d0i3_suspend_mutex);
 
@@ -936,8 +969,6 @@ static void iwl_mvm_rx(struct iwl_op_mode *op_mode,
 
        if (likely(pkt->hdr.cmd == REPLY_RX_MPDU_CMD))
                iwl_mvm_rx_rx_mpdu(mvm, napi, rxb);
-       else if (pkt->hdr.cmd == FRAME_RELEASE)
-               iwl_mvm_rx_frame_release(mvm, napi, rxb, 0);
        else if (pkt->hdr.cmd == REPLY_RX_PHY_CMD)
                iwl_mvm_rx_rx_phy_cmd(mvm, rxb);
        else
@@ -953,11 +984,11 @@ static void iwl_mvm_rx_mq(struct iwl_op_mode *op_mode,
 
        if (likely(pkt->hdr.cmd == REPLY_RX_MPDU_CMD))
                iwl_mvm_rx_mpdu_mq(mvm, napi, rxb, 0);
-       else if (pkt->hdr.cmd == REPLY_RX_PHY_CMD)
-               iwl_mvm_rx_phy_cmd_mq(mvm, rxb);
        else if (unlikely(pkt->hdr.group_id == DATA_PATH_GROUP &&
                          pkt->hdr.cmd == RX_QUEUES_NOTIFICATION))
                iwl_mvm_rx_queue_notif(mvm, rxb, 0);
+       else if (pkt->hdr.cmd == FRAME_RELEASE)
+               iwl_mvm_rx_frame_release(mvm, napi, rxb, 0);
        else
                iwl_mvm_rx_common(mvm, rxb, pkt);
 }
index 7b1f6ad..ff85865 100644
@@ -308,7 +308,7 @@ static bool iwl_mvm_power_allow_uapsd(struct iwl_mvm *mvm,
                /* Allow U-APSD only if p2p is stand alone */
                bool is_p2p_standalone = true;
 
-               if (!iwl_mvm_is_p2p_standalone_uapsd_supported(mvm))
+               if (!iwl_mvm_is_p2p_scm_uapsd_supported(mvm))
                        return false;
 
                ieee80211_iterate_active_interfaces_atomic(mvm->hw,
index 81dd2f6..227c5ed 100644 (file)
@@ -211,6 +211,9 @@ static bool rs_sgi_allow(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
        if (is_ht80(rate) && (vht_cap->cap &
                             IEEE80211_VHT_CAP_SHORT_GI_80))
                return true;
+       if (is_ht160(rate) && (vht_cap->cap &
+                            IEEE80211_VHT_CAP_SHORT_GI_160))
+               return true;
 
        return false;
 }
@@ -399,7 +402,7 @@ static int iwl_hwrate_to_plcp_idx(u32 rate_n_flags)
 static void rs_rate_scale_perform(struct iwl_mvm *mvm,
                                  struct ieee80211_sta *sta,
                                  struct iwl_lq_sta *lq_sta,
-                                 int tid);
+                                 int tid, bool ndp);
 static void rs_fill_lq_cmd(struct iwl_mvm *mvm,
                           struct ieee80211_sta *sta,
                           struct iwl_lq_sta *lq_sta,
@@ -445,6 +448,13 @@ static const u16 expected_tpt_siso_80MHz[4][IWL_RATE_COUNT] = {
        {0, 0, 0, 0, 241, 0, 475, 701, 921, 1343, 1741, 1931, 2117, 2468, 2691},
 };
 
+static const u16 expected_tpt_siso_160MHz[4][IWL_RATE_COUNT] = {
+       {0, 0, 0, 0, 191, 0, 244, 288,  298,  308,  313,  318,  323,  328,  330},
+       {0, 0, 0, 0, 200, 0, 251, 293,  302,  312,  317,  322,  327,  332,  334},
+       {0, 0, 0, 0, 439, 0, 875, 1307, 1736, 2584, 3419, 3831, 4240, 5049, 5581},
+       {0, 0, 0, 0, 488, 0, 972, 1451, 1925, 2864, 3785, 4240, 4691, 5581, 6165},
+};
+
 static const u16 expected_tpt_mimo2_20MHz[4][IWL_RATE_COUNT] = {
        {0, 0, 0, 0,  74, 0, 123, 155, 179, 213, 235, 243, 250,  261, 0},
        {0, 0, 0, 0,  81, 0, 131, 164, 187, 221, 242, 250, 256,  267, 0},
@@ -466,6 +476,13 @@ static const u16 expected_tpt_mimo2_80MHz[4][IWL_RATE_COUNT] = {
        {0, 0, 0, 0, 474, 0, 920, 1338, 1732, 2464, 3116, 3418, 3705, 4225, 4545},
 };
 
+static const u16 expected_tpt_mimo2_160MHz[4][IWL_RATE_COUNT] = {
+       {0, 0, 0, 0, 240, 0, 278,  308,  313,  319,  322,  324,  328,  330,   334},
+       {0, 0, 0, 0, 247, 0, 282,  310,  315,  320,  323,  325,  329,  332,   338},
+       {0, 0, 0, 0, 875, 0, 1735, 2582, 3414, 5043, 6619, 7389, 8147, 9629,  10592},
+       {0, 0, 0, 0, 971, 0, 1925, 2861, 3779, 5574, 7304, 8147, 8976, 10592, 11640},
+};
+
 /* mbps, mcs */
 static const struct iwl_rate_mcs_info iwl_rate_mcs[IWL_RATE_COUNT] = {
        {  "1", "BPSK DSSS"},
@@ -901,7 +918,6 @@ static int rs_rate_from_ucode_rate(const u32 ucode_rate,
                }
        }
 
-       WARN_ON_ONCE(rate->bw == RATE_MCS_CHAN_WIDTH_160);
        WARN_ON_ONCE(rate->bw == RATE_MCS_CHAN_WIDTH_80 &&
                     !is_vht(rate));
 
@@ -1161,7 +1177,7 @@ static u8 rs_get_tid(struct ieee80211_hdr *hdr)
 }
 
 void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
-                         int tid, struct ieee80211_tx_info *info)
+                         int tid, struct ieee80211_tx_info *info, bool ndp)
 {
        int legacy_success;
        int retries;
@@ -1384,7 +1400,7 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
 done:
        /* See if there's a better rate or modulation mode to try. */
        if (sta->supp_rates[info->band])
-               rs_rate_scale_perform(mvm, sta, lq_sta, tid);
+               rs_rate_scale_perform(mvm, sta, lq_sta, tid, ndp);
 }
 
 /*
@@ -1407,7 +1423,8 @@ static void rs_mac80211_tx_status(void *mvm_r,
            info->flags & IEEE80211_TX_CTL_NO_ACK)
                return;
 
-       iwl_mvm_rs_tx_status(mvm, sta, rs_get_tid(hdr), info);
+       iwl_mvm_rs_tx_status(mvm, sta, rs_get_tid(hdr), info,
+                            ieee80211_is_qos_nullfunc(hdr->frame_control));
 }
 
 /*
@@ -1494,6 +1511,9 @@ static const u16 *rs_get_expected_tpt_table(struct iwl_lq_sta *lq_sta,
                case RATE_MCS_CHAN_WIDTH_80:
                        ht_tbl_pointer = expected_tpt_siso_80MHz;
                        break;
+               case RATE_MCS_CHAN_WIDTH_160:
+                       ht_tbl_pointer = expected_tpt_siso_160MHz;
+                       break;
                default:
                        WARN_ON_ONCE(1);
                }
@@ -1508,6 +1528,9 @@ static const u16 *rs_get_expected_tpt_table(struct iwl_lq_sta *lq_sta,
                case RATE_MCS_CHAN_WIDTH_80:
                        ht_tbl_pointer = expected_tpt_mimo2_80MHz;
                        break;
+               case RATE_MCS_CHAN_WIDTH_160:
+                       ht_tbl_pointer = expected_tpt_mimo2_160MHz;
+                       break;
                default:
                        WARN_ON_ONCE(1);
                }
@@ -1582,12 +1605,17 @@ static s32 rs_get_best_rate(struct iwl_mvm *mvm,
 
 static u32 rs_bw_from_sta_bw(struct ieee80211_sta *sta)
 {
-       if (sta->bandwidth >= IEEE80211_STA_RX_BW_80)
+       switch (sta->bandwidth) {
+       case IEEE80211_STA_RX_BW_160:
+               return RATE_MCS_CHAN_WIDTH_160;
+       case IEEE80211_STA_RX_BW_80:
                return RATE_MCS_CHAN_WIDTH_80;
-       else if (sta->bandwidth >= IEEE80211_STA_RX_BW_40)
+       case IEEE80211_STA_RX_BW_40:
                return RATE_MCS_CHAN_WIDTH_40;
-
-       return RATE_MCS_CHAN_WIDTH_20;
+       case IEEE80211_STA_RX_BW_20:
+       default:
+               return RATE_MCS_CHAN_WIDTH_20;
+       }
 }
 
 /*
@@ -2213,7 +2241,7 @@ static bool rs_tpc_perform(struct iwl_mvm *mvm,
 static void rs_rate_scale_perform(struct iwl_mvm *mvm,
                                  struct ieee80211_sta *sta,
                                  struct iwl_lq_sta *lq_sta,
-                                 int tid)
+                                 int tid, bool ndp)
 {
        int low = IWL_RATE_INVALID;
        int high = IWL_RATE_INVALID;
@@ -2512,7 +2540,7 @@ lq_update:
                            (lq_sta->tx_agg_tid_en & (1 << tid)) &&
                            (tid != IWL_MAX_TID_COUNT)) {
                                tid_data = &sta_priv->tid_data[tid];
-                               if (tid_data->state == IWL_AGG_OFF) {
+                               if (tid_data->state == IWL_AGG_OFF && !ndp) {
                                        IWL_DEBUG_RATE(mvm,
                                                       "try to aggregate tid %d\n",
                                                       tid);
@@ -2565,6 +2593,9 @@ static const struct rs_init_rate_info rs_optimal_rates_ht[] = {
        { S8_MIN, IWL_RATE_MCS_0_INDEX},
 };
 
+/* MCS index 9 is not valid for 20MHz VHT channel width,
+ * but is ok for 40, 80 and 160MHz channels.
+ */
 static const struct rs_init_rate_info rs_optimal_rates_vht_20mhz[] = {
        { -60, IWL_RATE_MCS_8_INDEX },
        { -64, IWL_RATE_MCS_7_INDEX },
@@ -2577,7 +2608,7 @@ static const struct rs_init_rate_info rs_optimal_rates_vht_20mhz[] = {
        { S8_MIN, IWL_RATE_MCS_0_INDEX},
 };
 
-static const struct rs_init_rate_info rs_optimal_rates_vht_40_80mhz[] = {
+static const struct rs_init_rate_info rs_optimal_rates_vht[] = {
        { -60, IWL_RATE_MCS_9_INDEX },
        { -64, IWL_RATE_MCS_8_INDEX },
        { -68, IWL_RATE_MCS_7_INDEX },
@@ -2640,9 +2671,9 @@ static void rs_init_optimal_rate(struct iwl_mvm *mvm,
                        lq_sta->optimal_nentries =
                                ARRAY_SIZE(rs_optimal_rates_vht_20mhz);
                } else {
-                       lq_sta->optimal_rates = rs_optimal_rates_vht_40_80mhz;
+                       lq_sta->optimal_rates = rs_optimal_rates_vht;
                        lq_sta->optimal_nentries =
-                               ARRAY_SIZE(rs_optimal_rates_vht_40_80mhz);
+                               ARRAY_SIZE(rs_optimal_rates_vht);
                }
        } else if (is_ht(rate)) {
                lq_sta->optimal_rates = rs_optimal_rates_ht;
@@ -2734,23 +2765,25 @@ static void rs_get_initial_rate(struct iwl_mvm *mvm,
         */
        if (sta->vht_cap.vht_supported &&
            best_rssi > IWL_RS_LOW_RSSI_THRESHOLD) {
-               if (sta->bandwidth >= IEEE80211_STA_RX_BW_40) {
-                       initial_rates = rs_optimal_rates_vht_40_80mhz;
-                       nentries = ARRAY_SIZE(rs_optimal_rates_vht_40_80mhz);
-                       if (sta->bandwidth >= IEEE80211_STA_RX_BW_80)
-                               rate->bw = RATE_MCS_CHAN_WIDTH_80;
-                       else
-                               rate->bw = RATE_MCS_CHAN_WIDTH_40;
-               } else if (sta->bandwidth == IEEE80211_STA_RX_BW_20) {
+               switch (sta->bandwidth) {
+               case IEEE80211_STA_RX_BW_160:
+               case IEEE80211_STA_RX_BW_80:
+               case IEEE80211_STA_RX_BW_40:
+                       initial_rates = rs_optimal_rates_vht;
+                       nentries = ARRAY_SIZE(rs_optimal_rates_vht);
+                       break;
+               case IEEE80211_STA_RX_BW_20:
                        initial_rates = rs_optimal_rates_vht_20mhz;
                        nentries = ARRAY_SIZE(rs_optimal_rates_vht_20mhz);
-                       rate->bw = RATE_MCS_CHAN_WIDTH_20;
-               } else {
+                       break;
+               default:
                        IWL_ERR(mvm, "Invalid BW %d\n", sta->bandwidth);
                        goto out;
                }
+
                active_rate = lq_sta->active_siso_rate;
                rate->type = LQ_VHT_SISO;
+               rate->bw = rs_bw_from_sta_bw(sta);
        } else if (sta->ht_cap.ht_supported &&
                   best_rssi > IWL_RS_LOW_RSSI_THRESHOLD) {
                initial_rates = rs_optimal_rates_ht;
@@ -3057,6 +3090,9 @@ void iwl_mvm_update_frame_stats(struct iwl_mvm *mvm, u32 rate, bool agg)
        case RATE_MCS_CHAN_WIDTH_80:
                mvm->drv_rx_stats.bw_80_frames++;
                break;
+       case RATE_MCS_CHAN_WIDTH_160:
+               mvm->drv_rx_stats.bw_160_frames++;
+               break;
        default:
                WARN_ONCE(1, "bad BW. rate 0x%x", rate);
        }
@@ -3705,7 +3741,8 @@ static ssize_t rs_sta_dbgfs_scale_table_read(struct file *file,
                desc += sprintf(buff + desc, " %s",
                                (is_ht20(rate)) ? "20MHz" :
                                (is_ht40(rate)) ? "40MHz" :
-                               (is_ht80(rate)) ? "80Mhz" : "BAD BW");
+                               (is_ht80(rate)) ? "80MHz" :
+                               (is_ht160(rate)) ? "160MHz" : "BAD BW");
                desc += sprintf(buff + desc, " %s %s %s %s\n",
                                (rate->sgi) ? "SGI" : "NGI",
                                (rate->ldpc) ? "LDPC" : "BCC",
@@ -3787,9 +3824,10 @@ static ssize_t rs_sta_dbgfs_stats_table_read(struct file *file,
                                lq_sta->active_tbl == i ? "*" : "x",
                                rate->type,
                                rate->sgi,
-                               is_ht20(rate) ? "20Mhz" :
-                               is_ht40(rate) ? "40Mhz" :
-                               is_ht80(rate) ? "80Mhz" : "ERR",
+                               is_ht20(rate) ? "20MHz" :
+                               is_ht40(rate) ? "40MHz" :
+                               is_ht80(rate) ? "80MHz" :
+                               is_ht160(rate) ? "160MHz" : "ERR",
                                rate->index);
                for (j = 0; j < IWL_RATE_COUNT; j++) {
                        desc += sprintf(buff+desc,
index 90d046f..ee207f2 100644
@@ -205,6 +205,7 @@ struct rs_rate {
 #define is_ht20(rate)         ((rate)->bw == RATE_MCS_CHAN_WIDTH_20)
 #define is_ht40(rate)         ((rate)->bw == RATE_MCS_CHAN_WIDTH_40)
 #define is_ht80(rate)         ((rate)->bw == RATE_MCS_CHAN_WIDTH_80)
+#define is_ht160(rate)        ((rate)->bw == RATE_MCS_CHAN_WIDTH_160)
 
 #define IWL_MAX_MCS_DISPLAY_SIZE       12
 
@@ -362,7 +363,7 @@ void iwl_mvm_rs_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
 
 /* Notify RS about Tx status */
 void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
-                         int tid, struct ieee80211_tx_info *info);
+                         int tid, struct ieee80211_tx_info *info, bool ndp);
 
 /**
  * iwl_rate_control_register - Register the rate control algorithm callbacks
index ab7f7ed..0e60e38 100644
@@ -101,7 +101,7 @@ static void iwl_mvm_pass_packet_to_mac80211(struct iwl_mvm *mvm,
                                            struct napi_struct *napi,
                                            struct sk_buff *skb,
                                            struct ieee80211_hdr *hdr, u16 len,
-                                           u32 ampdu_status, u8 crypt_len,
+                                           u8 crypt_len,
                                            struct iwl_rx_cmd_buffer *rxb)
 {
        unsigned int hdrlen, fraglen;
@@ -268,7 +268,6 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
        struct ieee80211_sta *sta = NULL;
        struct sk_buff *skb;
        u32 len;
-       u32 ampdu_status;
        u32 rate_n_flags;
        u32 rx_pkt_status;
        u8 crypt_len = 0;
@@ -354,13 +353,22 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
 
        if (sta) {
                struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+               struct ieee80211_vif *tx_blocked_vif =
+                       rcu_dereference(mvm->csa_tx_blocked_vif);
 
                /* We have tx blocked stations (with CS bit). If we heard
                 * frames from a blocked station on a new channel we can
                 * TX to it again.
                 */
-               if (unlikely(mvm->csa_tx_block_bcn_timeout))
-                       iwl_mvm_sta_modify_disable_tx_ap(mvm, sta, false);
+               if (unlikely(tx_blocked_vif) &&
+                   mvmsta->vif == tx_blocked_vif) {
+                       struct iwl_mvm_vif *mvmvif =
+                               iwl_mvm_vif_from_mac80211(tx_blocked_vif);
+
+                       if (mvmvif->csa_target_freq == rx_status->freq)
+                               iwl_mvm_sta_modify_disable_tx_ap(mvm, sta,
+                                                                false);
+               }
 
                rs_update_last_rssi(mvm, &mvmsta->lq_sta, rx_status);
 
@@ -471,7 +479,7 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
                iwl_mvm_ref(mvm, IWL_MVM_REF_RX);
 
        iwl_mvm_pass_packet_to_mac80211(mvm, sta, napi, skb, hdr, len,
-                                       ampdu_status, crypt_len, rxb);
+                                       crypt_len, rxb);
 
        if (take_ref)
                iwl_mvm_unref(mvm, IWL_MVM_REF_RX);
@@ -490,6 +498,7 @@ struct iwl_mvm_stat_data {
        __le32 mac_id;
        u8 beacon_filter_average_energy;
        struct mvm_statistics_general_v8 *general;
+       struct mvm_statistics_load *load;
 };
 
 static void iwl_mvm_stat_iterator(void *_data, u8 *mac,
@@ -606,13 +615,15 @@ iwl_mvm_rx_stats_check_trigger(struct iwl_mvm *mvm, struct iwl_rx_packet *pkt)
 void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
                                  struct iwl_rx_packet *pkt)
 {
-       struct iwl_notif_statistics_v10 *stats = (void *)&pkt->data;
+       struct iwl_notif_statistics_v11 *stats = (void *)&pkt->data;
        struct iwl_mvm_stat_data data = {
                .mvm = mvm,
        };
+       int expected_size = iwl_mvm_has_new_rx_api(mvm) ? sizeof(*stats) :
+                           sizeof(struct iwl_notif_statistics_v10);
        u32 temperature;
 
-       if (iwl_rx_packet_payload_len(pkt) != sizeof(*stats))
+       if (iwl_rx_packet_payload_len(pkt) != expected_size)
                goto invalid;
 
        temperature = le32_to_cpu(stats->general.radio_temperature);
@@ -630,6 +641,25 @@ void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
                le64_to_cpu(stats->general.on_time_scan);
 
        data.general = &stats->general;
+       if (iwl_mvm_has_new_rx_api(mvm)) {
+               int i;
+
+               data.load = &stats->load_stats;
+
+               rcu_read_lock();
+               for (i = 0; i < IWL_MVM_STATION_COUNT; i++) {
+                       struct iwl_mvm_sta *sta;
+
+                       if (!data.load->avg_energy[i])
+                               continue;
+
+                       sta = iwl_mvm_sta_from_staid_rcu(mvm, i);
+                       if (!sta)
+                               continue;
+                       sta->avg_energy = data.load->avg_energy[i];
+               }
+               rcu_read_unlock();
+       }
 
        iwl_mvm_rx_stats_check_trigger(mvm, pkt);
 
index 2c61516..df6c32c 100644
 #include "fw-api.h"
 #include "fw-dbg.h"
 
-void iwl_mvm_rx_phy_cmd_mq(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
-{
-       mvm->ampdu_ref++;
-
-#ifdef CONFIG_IWLWIFI_DEBUGFS
-       if (mvm->last_phy_info.phy_flags & cpu_to_le16(RX_RES_PHY_FLAGS_AGG)) {
-               spin_lock(&mvm->drv_stats_lock);
-               mvm->drv_rx_stats.ampdu_count++;
-               spin_unlock(&mvm->drv_stats_lock);
-       }
-#endif
-}
-
 static inline int iwl_mvm_check_pn(struct iwl_mvm *mvm, struct sk_buff *skb,
                                   int queue, struct ieee80211_sta *sta)
 {
@@ -489,6 +476,9 @@ void iwl_mvm_reorder_timer_expired(unsigned long data)
                rcu_read_lock();
                sta = rcu_dereference(buf->mvm->fw_id_to_mac_id[buf->sta_id]);
                /* SN is set to the last expired frame + 1 */
+               IWL_DEBUG_HT(buf->mvm,
+                            "Releasing expired frames for sta %u, sn %d\n",
+                            buf->sta_id, sn);
                iwl_mvm_release_frames(buf->mvm, sta, NULL, buf, sn);
                rcu_read_unlock();
        } else if (buf->num_stored) {
@@ -587,6 +577,8 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
        struct sk_buff *tail;
        u32 reorder = le32_to_cpu(desc->reorder_data);
        bool amsdu = desc->mac_flags2 & IWL_RX_MPDU_MFLG2_AMSDU;
+       bool last_subframe =
+               desc->amsdu_info & IWL_RX_MPDU_AMSDU_LAST_SUBFRAME;
        u8 tid = *ieee80211_get_qos_ctl(hdr) & IEEE80211_QOS_CTL_TID_MASK;
        u8 sub_frame_idx = desc->amsdu_info &
                           IWL_RX_MPDU_AMSDU_SUBFRAME_IDX_MASK;
@@ -653,7 +645,8 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
        /* release immediately if allowed by nssn and no stored frames */
        if (!buffer->num_stored && ieee80211_sn_less(sn, nssn)) {
                if (iwl_mvm_is_sn_less(buffer->head_sn, nssn,
-                                      buffer->buf_size))
+                                      buffer->buf_size) &&
+                  (!amsdu || last_subframe))
                        buffer->head_sn = nssn;
                /* No need to update AMSDU last SN - we are moving the head */
                spin_unlock_bh(&buffer->lock);
@@ -687,7 +680,20 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
                buffer->last_sub_index = sub_frame_idx;
        }
 
-       iwl_mvm_release_frames(mvm, sta, napi, buffer, nssn);
+       /*
+        * We cannot trust NSSN for AMSDU sub-frames that are not the last.
+        * The reason is that NSSN advances on the first sub-frame, and may
+        * cause the reorder buffer to advance before all the sub-frames arrive.
+        * Example: reorder buffer contains SN 0 & 2, and we receive AMSDU with
+        * SN 1. NSSN for the first sub-frame will be 3, so the driver releases
+        * SN 0, 1 and 2. When the next sub-frame of SN 1 arrives, the reorder
+        * buffer is already ahead of it and the sub-frame will be dropped.
+        * If the last sub-frame is not on this queue - we will get frame
+        * release notification with up to date NSSN.
+        */
+       if (!amsdu || last_subframe)
+               iwl_mvm_release_frames(mvm, sta, napi, buffer, nssn);
+
        spin_unlock_bh(&buffer->lock);
        return true;
 
@@ -736,6 +742,7 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
        struct ieee80211_hdr *hdr = (void *)(pkt->data + sizeof(*desc));
        u32 len = le16_to_cpu(desc->mpdu_len);
        u32 rate_n_flags = le32_to_cpu(desc->rate_n_flags);
+       u16 phy_info = le16_to_cpu(desc->phy_info);
        struct ieee80211_sta *sta = NULL;
        struct sk_buff *skb;
        u8 crypt_len = 0;
@@ -766,16 +773,34 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                             le16_to_cpu(desc->status));
                rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
        }
-
-       rx_status->mactime = le64_to_cpu(desc->tsf_on_air_rise);
+       /* set the preamble flag if appropriate */
+       if (phy_info & IWL_RX_MPDU_PHY_SHORT_PREAMBLE)
+               rx_status->flag |= RX_FLAG_SHORTPRE;
+
+       if (likely(!(phy_info & IWL_RX_MPDU_PHY_TSF_OVERLOAD))) {
+               rx_status->mactime = le64_to_cpu(desc->tsf_on_air_rise);
+               /* TSF as indicated by the firmware is at INA time */
+               rx_status->flag |= RX_FLAG_MACTIME_PLCP_START;
+       }
        rx_status->device_timestamp = le32_to_cpu(desc->gp2_on_air_rise);
        rx_status->band = desc->channel > 14 ? NL80211_BAND_5GHZ :
                                               NL80211_BAND_2GHZ;
        rx_status->freq = ieee80211_channel_to_frequency(desc->channel,
                                                         rx_status->band);
        iwl_mvm_get_signal_strength(mvm, desc, rx_status);
-       /* TSF as indicated by the firmware is at INA time */
-       rx_status->flag |= RX_FLAG_MACTIME_PLCP_START;
+
+       /* update aggregation data for monitor mode on the default queue */
+       if (!queue && (phy_info & IWL_RX_MPDU_PHY_AMPDU)) {
+               bool toggle_bit = phy_info & IWL_RX_MPDU_PHY_AMPDU_TOGGLE;
+
+               rx_status->flag |= RX_FLAG_AMPDU_DETAILS;
+               rx_status->ampdu_reference = mvm->ampdu_ref;
+               /* toggle is switched whenever new aggregation starts */
+               if (toggle_bit != mvm->ampdu_toggle) {
+                       mvm->ampdu_ref++;
+                       mvm->ampdu_toggle = toggle_bit;
+               }
+       }
 
        rcu_read_lock();
 
@@ -797,6 +822,8 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
 
        if (sta) {
                struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+               struct ieee80211_vif *tx_blocked_vif =
+                       rcu_dereference(mvm->csa_tx_blocked_vif);
                u8 baid = (u8)((le32_to_cpu(desc->reorder_data) &
                               IWL_RX_MPDU_REORDER_BAID_MASK) >>
                               IWL_RX_MPDU_REORDER_BAID_SHIFT);
@@ -806,8 +833,15 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                 * frames from a blocked station on a new channel we can
                 * TX to it again.
                 */
-               if (unlikely(mvm->csa_tx_block_bcn_timeout))
-                       iwl_mvm_sta_modify_disable_tx_ap(mvm, sta, false);
+               if (unlikely(tx_blocked_vif) &&
+                   tx_blocked_vif == mvmsta->vif) {
+                       struct iwl_mvm_vif *mvmvif =
+                               iwl_mvm_vif_from_mac80211(tx_blocked_vif);
+
+                       if (mvmvif->csa_target_freq == rx_status->freq)
+                               iwl_mvm_sta_modify_disable_tx_ap(mvm, sta,
+                                                                false);
+               }
 
                rs_update_last_rssi(mvm, &mvmsta->lq_sta, rx_status);
 
@@ -830,8 +864,6 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                                iwl_mvm_fw_dbg_collect_trig(mvm, trig, NULL);
                }
 
-               /* TODO: multi queue TCM */
-
                if (ieee80211_is_data(hdr->frame_control))
                        iwl_mvm_rx_csum(sta, skb, desc);
 
@@ -856,14 +888,6 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                        iwl_mvm_agg_rx_received(mvm, baid);
        }
 
-       /*
-        * TODO: PHY info.
-        * Verify we don't have the information in the MPDU descriptor and
-        * that it is not needed.
-        * Make sure for monitor mode that we are on default queue, update
-        * ampdu_ref and the rest of phy info then
-        */
-
        /* Set up the HT phy flags */
        switch (rate_n_flags & RATE_MCS_CHAN_WIDTH_MSK) {
        case RATE_MCS_CHAN_WIDTH_20:
@@ -907,8 +931,18 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
                                                            rx_status->band);
        }
 
-       /* TODO: PHY info - update ampdu queue statistics (for debugfs) */
-       /* TODO: PHY info - gscan */
+       /* management stuff on default queue */
+       if (!queue) {
+               if (unlikely((ieee80211_is_beacon(hdr->frame_control) ||
+                             ieee80211_is_probe_resp(hdr->frame_control)) &&
+                            mvm->sched_scan_pass_all ==
+                            SCHED_SCAN_PASS_ALL_ENABLED))
+                       mvm->sched_scan_pass_all = SCHED_SCAN_PASS_ALL_FOUND;
+
+               if (unlikely(ieee80211_is_beacon(hdr->frame_control) ||
+                            ieee80211_is_probe_resp(hdr->frame_control)))
+                       rx_status->boottime_ns = ktime_get_boot_ns();
+       }
 
        iwl_mvm_create_skb(skb, hdr, len, crypt_len, rxb);
        if (!iwl_mvm_reorder(mvm, napi, queue, sta, skb, desc))
@@ -927,6 +961,9 @@ void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
 
        int baid = release->baid;
 
+       IWL_DEBUG_HT(mvm, "Frame release notification for BAID %u, NSSN %d\n",
+                    release->baid, le16_to_cpu(release->nssn));
+
        if (WARN_ON_ONCE(baid == IWL_RX_REORDER_DATA_INVALID_BAID))
                return;
 
index 1cac10c..dac120f 100644
@@ -402,7 +402,7 @@ void iwl_mvm_rx_lmac_scan_complete_notif(struct iwl_mvm *mvm,
                mvm->scan_status &= ~IWL_MVM_SCAN_REGULAR;
                ieee80211_scan_completed(mvm->hw, &info);
                iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN);
-               del_timer(&mvm->scan_timer);
+               cancel_delayed_work(&mvm->scan_timeout_dwork);
        } else {
                IWL_ERR(mvm,
                        "got scan complete notification but no scan is running\n");
@@ -1225,15 +1225,16 @@ static int iwl_mvm_check_running_scans(struct iwl_mvm *mvm, int type)
        return -EIO;
 }
 
-#define SCAN_TIMEOUT (20 * HZ)
+#define SCAN_TIMEOUT 20000
 
-void iwl_mvm_scan_timeout(unsigned long data)
+void iwl_mvm_scan_timeout_wk(struct work_struct *work)
 {
-       struct iwl_mvm *mvm = (struct iwl_mvm *)data;
+       struct delayed_work *delayed_work = to_delayed_work(work);
+       struct iwl_mvm *mvm = container_of(delayed_work, struct iwl_mvm,
+                                          scan_timeout_dwork);
 
        IWL_ERR(mvm, "regular scan timed out\n");
 
-       del_timer(&mvm->scan_timer);
        iwl_force_nmi(mvm->trans);
 }
 
@@ -1316,7 +1317,8 @@ int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
        mvm->scan_status |= IWL_MVM_SCAN_REGULAR;
        iwl_mvm_ref(mvm, IWL_MVM_REF_SCAN);
 
-       mod_timer(&mvm->scan_timer, jiffies + SCAN_TIMEOUT);
+       queue_delayed_work(system_wq, &mvm->scan_timeout_dwork,
+                          msecs_to_jiffies(SCAN_TIMEOUT));
 
        return 0;
 }
@@ -1439,7 +1441,7 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
 
                ieee80211_scan_completed(mvm->hw, &info);
                iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN);
-               del_timer(&mvm->scan_timer);
+               cancel_delayed_work(&mvm->scan_timeout_dwork);
        } else if (mvm->scan_uid_status[uid] == IWL_MVM_SCAN_SCHED) {
                ieee80211_sched_scan_stopped(mvm->hw);
                mvm->sched_scan_pass_all = SCHED_SCAN_PASS_ALL_DISABLED;
@@ -1644,7 +1646,7 @@ out:
                 * to release the scan reference here.
                 */
                iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN);
-               del_timer(&mvm->scan_timer);
+               cancel_delayed_work(&mvm->scan_timeout_dwork);
                if (notify) {
                        struct cfg80211_scan_info info = {
                                .aborted = true,
index 443a428..101fb04 100644
@@ -215,7 +215,7 @@ static int iwl_mvm_sf_config(struct iwl_mvm *mvm, u8 sta_id,
                             enum iwl_sf_state new_state)
 {
        struct iwl_sf_cfg_cmd sf_cmd = {
-               .state = cpu_to_le32(SF_FULL_ON),
+               .state = cpu_to_le32(new_state),
        };
        struct ieee80211_sta *sta;
        int ret = 0;
index b23ab4a..3130b9c 100644
@@ -310,6 +310,304 @@ static void iwl_mvm_tdls_sta_deinit(struct iwl_mvm *mvm,
                iwl_mvm_disable_txq(mvm, i, i, IWL_MAX_TID_COUNT, 0);
 }
 
+/* Disable aggregations for a bitmap of TIDs for a given station */
+static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue,
+                                       unsigned long disable_agg_tids,
+                                       bool remove_queue)
+{
+       struct iwl_mvm_add_sta_cmd cmd = {};
+       struct ieee80211_sta *sta;
+       struct iwl_mvm_sta *mvmsta;
+       u32 status;
+       u8 sta_id;
+       int ret;
+
+       spin_lock_bh(&mvm->queue_info_lock);
+       sta_id = mvm->queue_info[queue].ra_sta_id;
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       rcu_read_lock();
+
+       sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]);
+
+       if (WARN_ON_ONCE(IS_ERR_OR_NULL(sta))) {
+               rcu_read_unlock();
+               return -EINVAL;
+       }
+
+       mvmsta = iwl_mvm_sta_from_mac80211(sta);
+
+       mvmsta->tid_disable_agg |= disable_agg_tids;
+
+       cmd.mac_id_n_color = cpu_to_le32(mvmsta->mac_id_n_color);
+       cmd.sta_id = mvmsta->sta_id;
+       cmd.add_modify = STA_MODE_MODIFY;
+       cmd.modify_mask = STA_MODIFY_QUEUES;
+       if (disable_agg_tids)
+               cmd.modify_mask |= STA_MODIFY_TID_DISABLE_TX;
+       if (remove_queue)
+               cmd.modify_mask |= STA_MODIFY_QUEUE_REMOVAL;
+       cmd.tfd_queue_msk = cpu_to_le32(mvmsta->tfd_queue_msk);
+       cmd.tid_disable_tx = cpu_to_le16(mvmsta->tid_disable_agg);
+
+       rcu_read_unlock();
+
+       /* Notify FW of queue removal from the STA queues */
+       status = ADD_STA_SUCCESS;
+       ret = iwl_mvm_send_cmd_pdu_status(mvm, ADD_STA,
+                                         iwl_mvm_add_sta_cmd_size(mvm),
+                                         &cmd, &status);
+
+       return ret;
+}
+
+static int iwl_mvm_get_queue_agg_tids(struct iwl_mvm *mvm, int queue)
+{
+       struct ieee80211_sta *sta;
+       struct iwl_mvm_sta *mvmsta;
+       unsigned long tid_bitmap;
+       unsigned long agg_tids = 0;
+       s8 sta_id;
+       int tid;
+
+       lockdep_assert_held(&mvm->mutex);
+
+       spin_lock_bh(&mvm->queue_info_lock);
+       sta_id = mvm->queue_info[queue].ra_sta_id;
+       tid_bitmap = mvm->queue_info[queue].tid_bitmap;
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[sta_id],
+                                       lockdep_is_held(&mvm->mutex));
+
+       if (WARN_ON_ONCE(IS_ERR_OR_NULL(sta)))
+               return -EINVAL;
+
+       mvmsta = iwl_mvm_sta_from_mac80211(sta);
+
+       spin_lock_bh(&mvmsta->lock);
+       for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) {
+               if (mvmsta->tid_data[tid].state == IWL_AGG_ON)
+                       agg_tids |= BIT(tid);
+       }
+       spin_unlock_bh(&mvmsta->lock);
+
+       return agg_tids;
+}
+
+/*
+ * Remove a queue from a station's resources.
+ * Note that this only marks the queue as free. It DOESN'T delete any BA
+ * agreement, and doesn't disable the queue itself.
+ */
+static int iwl_mvm_remove_sta_queue_marking(struct iwl_mvm *mvm, int queue)
+{
+       struct ieee80211_sta *sta;
+       struct iwl_mvm_sta *mvmsta;
+       unsigned long tid_bitmap;
+       unsigned long disable_agg_tids = 0;
+       u8 sta_id;
+       int tid;
+
+       lockdep_assert_held(&mvm->mutex);
+
+       spin_lock_bh(&mvm->queue_info_lock);
+       sta_id = mvm->queue_info[queue].ra_sta_id;
+       tid_bitmap = mvm->queue_info[queue].tid_bitmap;
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       rcu_read_lock();
+
+       sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]);
+
+       if (WARN_ON_ONCE(IS_ERR_OR_NULL(sta))) {
+               rcu_read_unlock();
+               return 0;
+       }
+
+       mvmsta = iwl_mvm_sta_from_mac80211(sta);
+
+       spin_lock_bh(&mvmsta->lock);
+       /* Unmap MAC queues and TIDs from this queue */
+       for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) {
+               if (mvmsta->tid_data[tid].state == IWL_AGG_ON)
+                       disable_agg_tids |= BIT(tid);
+               mvmsta->tid_data[tid].txq_id = IEEE80211_INVAL_HW_QUEUE;
+       }
+
+       mvmsta->tfd_queue_msk &= ~BIT(queue); /* Don't use this queue anymore */
+       spin_unlock_bh(&mvmsta->lock);
+
+       rcu_read_unlock();
+
+       spin_lock_bh(&mvm->queue_info_lock);
+       /* Unmap MAC queues and TIDs from this queue */
+       mvm->queue_info[queue].hw_queue_to_mac80211 = 0;
+       mvm->queue_info[queue].hw_queue_refcount = 0;
+       mvm->queue_info[queue].tid_bitmap = 0;
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       return disable_agg_tids;
+}
+
+static int iwl_mvm_get_shared_queue(struct iwl_mvm *mvm,
+                                   unsigned long tfd_queue_mask, u8 ac)
+{
+       int queue = 0;
+       u8 ac_to_queue[IEEE80211_NUM_ACS];
+       int i;
+
+       lockdep_assert_held(&mvm->queue_info_lock);
+
+       memset(&ac_to_queue, IEEE80211_INVAL_HW_QUEUE, sizeof(ac_to_queue));
+
+       /* See what ACs the existing queues for this STA have */
+       for_each_set_bit(i, &tfd_queue_mask, IWL_MVM_DQA_MAX_DATA_QUEUE) {
+               /* Only DATA queues can be shared */
+               if (i < IWL_MVM_DQA_MIN_DATA_QUEUE &&
+                   i != IWL_MVM_DQA_BSS_CLIENT_QUEUE)
+                       continue;
+
+               ac_to_queue[mvm->queue_info[i].mac80211_ac] = i;
+       }
+
+       /*
+        * The queue to share is chosen only from DATA queues as follows (in
+        * descending priority):
+        * 1. An AC_BE queue
+        * 2. Same AC queue
+        * 3. Highest AC queue that is lower than new AC
+        * 4. Any existing AC (there always is at least 1 DATA queue)
+        */
+
+       /* Priority 1: An AC_BE queue */
+       if (ac_to_queue[IEEE80211_AC_BE] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[IEEE80211_AC_BE];
+       /* Priority 2: Same AC queue */
+       else if (ac_to_queue[ac] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[ac];
+       /* Priority 3a: If new AC is VO and VI exists - use VI */
+       else if (ac == IEEE80211_AC_VO &&
+                ac_to_queue[IEEE80211_AC_VI] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[IEEE80211_AC_VI];
+       /* Priority 3b: No BE so only AC less than the new one is BK */
+       else if (ac_to_queue[IEEE80211_AC_BK] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[IEEE80211_AC_BK];
+       /* Priority 4a: No BE nor BK - use VI if exists */
+       else if (ac_to_queue[IEEE80211_AC_VI] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[IEEE80211_AC_VI];
+       /* Priority 4b: No BE, BK nor VI - use VO if exists */
+       else if (ac_to_queue[IEEE80211_AC_VO] != IEEE80211_INVAL_HW_QUEUE)
+               queue = ac_to_queue[IEEE80211_AC_VO];
+
+       /* Make sure the queue we found (if any) is legal */
+       if (!((queue >= IWL_MVM_DQA_MIN_MGMT_QUEUE &&
+              queue <= IWL_MVM_DQA_MAX_MGMT_QUEUE) ||
+             (queue >= IWL_MVM_DQA_MIN_DATA_QUEUE &&
+              queue <= IWL_MVM_DQA_MAX_DATA_QUEUE) ||
+             (queue == IWL_MVM_DQA_BSS_CLIENT_QUEUE))) {
+               IWL_ERR(mvm, "No DATA queues available to share\n");
+               queue = -ENOSPC;
+       }
+
+       return queue;
+}
+
+/*
+ * If a given queue has a higher AC than the TID stream that is being added to
+ * it, the queue needs to be redirected to the lower AC. This function does that
+ * in such a case; otherwise, if no redirection is required, it does nothing,
+ * unless the %force param is true.
+ */
+static int iwl_mvm_scd_queue_redirect(struct iwl_mvm *mvm, int queue, int tid,
+                                     int ac, int ssn, unsigned int wdg_timeout,
+                                     bool force)
+{
+       struct iwl_scd_txq_cfg_cmd cmd = {
+               .scd_queue = queue,
+               .enable = 0,
+       };
+       bool shared_queue;
+       unsigned long mq;
+       int ret;
+
+       /*
+        * If the AC is lower than current one - FIFO needs to be redirected to
+        * the lowest one of the streams in the queue. Check if this is needed
+        * here.
+        * Notice that the enum ieee80211_ac_numbers is "flipped", so BK is with
+        * value 3 and VO with value 0, so to check if ac X is lower than ac Y
+        * we need to check if the numerical value of X is LARGER than of Y.
+        */
+       spin_lock_bh(&mvm->queue_info_lock);
+       if (ac <= mvm->queue_info[queue].mac80211_ac && !force) {
+               spin_unlock_bh(&mvm->queue_info_lock);
+
+               IWL_DEBUG_TX_QUEUES(mvm,
+                                   "No redirection needed on TXQ #%d\n",
+                                   queue);
+               return 0;
+       }
+
+       cmd.sta_id = mvm->queue_info[queue].ra_sta_id;
+       cmd.tx_fifo = iwl_mvm_ac_to_tx_fifo[mvm->queue_info[queue].mac80211_ac];
+       mq = mvm->queue_info[queue].hw_queue_to_mac80211;
+       shared_queue = (mvm->queue_info[queue].hw_queue_refcount > 1);
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       IWL_DEBUG_TX_QUEUES(mvm, "Redirecting shared TXQ #%d to FIFO #%d\n",
+                           queue, iwl_mvm_ac_to_tx_fifo[ac]);
+
+       /* Stop MAC queues and wait for this queue to empty */
+       iwl_mvm_stop_mac_queues(mvm, mq);
+       ret = iwl_trans_wait_tx_queue_empty(mvm->trans, BIT(queue));
+       if (ret) {
+               IWL_ERR(mvm, "Error draining queue %d before reconfig\n",
+                       queue);
+               ret = -EIO;
+               goto out;
+       }
+
+       /* Before redirecting the queue we need to de-activate it */
+       iwl_trans_txq_disable(mvm->trans, queue, false);
+       ret = iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, 0, sizeof(cmd), &cmd);
+       if (ret)
+               IWL_ERR(mvm, "Failed SCD disable TXQ %d (ret=%d)\n", queue,
+                       ret);
+
+       /* Make sure the SCD wrptr is correctly set before reconfiguring */
+       iwl_trans_txq_enable(mvm->trans, queue, iwl_mvm_ac_to_tx_fifo[ac],
+                            cmd.sta_id, tid, LINK_QUAL_AGG_FRAME_LIMIT_DEF,
+                            ssn, wdg_timeout);
+
+       /* TODO: Work-around SCD bug when moving back by multiples of 0x40 */
+
+       /* Redirect to lower AC */
+       iwl_mvm_reconfig_scd(mvm, queue, iwl_mvm_ac_to_tx_fifo[ac],
+                            cmd.sta_id, tid, LINK_QUAL_AGG_FRAME_LIMIT_DEF,
+                            ssn);
+
+       /* Update AC marking of the queue */
+       spin_lock_bh(&mvm->queue_info_lock);
+       mvm->queue_info[queue].mac80211_ac = ac;
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       /*
+        * Mark queue as shared in transport if shared
+        * Note this has to be done after queue enablement because enablement
+        * can also set this value, and there is no indication there of shared
+        * queues.
+        */
+       if (shared_queue)
+               iwl_trans_txq_set_shared_mode(mvm->trans, queue, true);
+
+out:
+       /* Continue using the MAC queues */
+       iwl_mvm_start_mac_queues(mvm, mq);
+
+       return ret;
+}
+
 static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
                                   struct ieee80211_sta *sta, u8 ac, int tid,
                                   struct ieee80211_hdr *hdr)
@@ -325,11 +623,20 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
                iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false);
        u8 mac_queue = mvmsta->vif->hw_queue[ac];
        int queue = -1;
+       bool using_inactive_queue = false;
+       unsigned long disable_agg_tids = 0;
+       enum iwl_mvm_agg_state queue_state;
+       bool shared_queue = false;
        int ssn;
+       unsigned long tfd_queue_mask;
        int ret;
 
        lockdep_assert_held(&mvm->mutex);
 
+       spin_lock_bh(&mvmsta->lock);
+       tfd_queue_mask = mvmsta->tfd_queue_msk;
+       spin_unlock_bh(&mvmsta->lock);
+
        spin_lock_bh(&mvm->queue_info_lock);
 
        /*
@@ -338,7 +645,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
         */
        if (!ieee80211_is_data_qos(hdr->frame_control) ||
            ieee80211_is_qos_nullfunc(hdr->frame_control)) {
-               queue = iwl_mvm_find_free_queue(mvm, IWL_MVM_DQA_MIN_MGMT_QUEUE,
+               queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+                                               IWL_MVM_DQA_MIN_MGMT_QUEUE,
                                                IWL_MVM_DQA_MAX_MGMT_QUEUE);
                if (queue >= IWL_MVM_DQA_MIN_MGMT_QUEUE)
                        IWL_DEBUG_TX_QUEUES(mvm, "Found free MGMT queue #%d\n",
@@ -347,29 +655,62 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
                /* If no such queue is found, we'll use a DATA queue instead */
        }
 
-       if (queue < 0 && mvmsta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) {
+       if ((queue < 0 && mvmsta->reserved_queue != IEEE80211_INVAL_HW_QUEUE) &&
+           (mvm->queue_info[mvmsta->reserved_queue].status ==
+            IWL_MVM_QUEUE_RESERVED ||
+            mvm->queue_info[mvmsta->reserved_queue].status ==
+            IWL_MVM_QUEUE_INACTIVE)) {
                queue = mvmsta->reserved_queue;
+               mvm->queue_info[queue].reserved = true;
                IWL_DEBUG_TX_QUEUES(mvm, "Using reserved queue #%d\n", queue);
        }
 
        if (queue < 0)
-               queue = iwl_mvm_find_free_queue(mvm, IWL_MVM_DQA_MIN_DATA_QUEUE,
+               queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+                                               IWL_MVM_DQA_MIN_DATA_QUEUE,
                                                IWL_MVM_DQA_MAX_DATA_QUEUE);
 
+       /*
+        * Check if this queue is already allocated but inactive.
+        * In such a case, we'll need to first free this queue before enabling
+        * it again, so we'll mark it as reserved to make sure no new traffic
+        * arrives on it
+        */
+       if (queue > 0 &&
+           mvm->queue_info[queue].status == IWL_MVM_QUEUE_INACTIVE) {
+               mvm->queue_info[queue].status = IWL_MVM_QUEUE_RESERVED;
+               using_inactive_queue = true;
+               IWL_DEBUG_TX_QUEUES(mvm,
+                                   "Re-assigning TXQ %d: sta_id=%d, tid=%d\n",
+                                   queue, mvmsta->sta_id, tid);
+       }
+
+       /* No free queue - we'll have to share */
+       if (queue <= 0) {
+               queue = iwl_mvm_get_shared_queue(mvm, tfd_queue_mask, ac);
+               if (queue > 0) {
+                       shared_queue = true;
+                       mvm->queue_info[queue].status = IWL_MVM_QUEUE_SHARED;
+               }
+       }
+
        /*
         * Mark TXQ as ready, even though it hasn't been fully configured yet,
         * to make sure no one else takes it.
         * This will allow avoiding re-acquiring the lock at the end of the
         * configuration. On error we'll mark it back as free.
         */
-       if (queue >= 0)
+       if ((queue > 0) && !shared_queue)
                mvm->queue_info[queue].status = IWL_MVM_QUEUE_READY;
 
        spin_unlock_bh(&mvm->queue_info_lock);
 
-       /* TODO: support shared queues for same RA */
-       if (queue < 0)
+       /* This shouldn't happen - out of queues */
+       if (WARN_ON(queue <= 0)) {
+               IWL_ERR(mvm, "No available queues for tid %d on sta_id %d\n",
+                       tid, cfg.sta_id);
                return -ENOSPC;
+       }
 
        /*
         * Actual en/disablement of aggregations is through the ADD_STA HCMD,
@@ -380,24 +721,103 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm,
        cfg.aggregate = (queue >= IWL_MVM_DQA_MIN_DATA_QUEUE ||
                         queue == IWL_MVM_DQA_BSS_CLIENT_QUEUE);
 
-       IWL_DEBUG_TX_QUEUES(mvm, "Allocating queue #%d to sta %d on tid %d\n",
-                           queue, mvmsta->sta_id, tid);
+       /*
+        * If this queue was previously inactive (idle) - we need to free it
+        * first
+        */
+       if (using_inactive_queue) {
+               struct iwl_scd_txq_cfg_cmd cmd = {
+                       .scd_queue = queue,
+                       .enable = 0,
+               };
+               u8 ac;
+
+               disable_agg_tids = iwl_mvm_remove_sta_queue_marking(mvm, queue);
+
+               spin_lock_bh(&mvm->queue_info_lock);
+               ac = mvm->queue_info[queue].mac80211_ac;
+               cmd.sta_id = mvm->queue_info[queue].ra_sta_id;
+               cmd.tx_fifo = iwl_mvm_ac_to_tx_fifo[ac];
+               spin_unlock_bh(&mvm->queue_info_lock);
+
+               /* Disable the queue */
+               iwl_mvm_invalidate_sta_queue(mvm, queue, disable_agg_tids,
+                                            true);
+               iwl_trans_txq_disable(mvm->trans, queue, false);
+               ret = iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, 0, sizeof(cmd),
+                                          &cmd);
+               if (ret) {
+                       IWL_ERR(mvm,
+                               "Failed to free inactive queue %d (ret=%d)\n",
+                               queue, ret);
+
+                       /* Re-mark the inactive queue as inactive */
+                       spin_lock_bh(&mvm->queue_info_lock);
+                       mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE;
+                       spin_unlock_bh(&mvm->queue_info_lock);
+
+                       return ret;
+               }
+       }
+
+       IWL_DEBUG_TX_QUEUES(mvm,
+                           "Allocating %squeue #%d to sta %d on tid %d\n",
+                           shared_queue ? "shared " : "", queue,
+                           mvmsta->sta_id, tid);
+
+       if (shared_queue) {
+               /* Disable any open aggs on this queue */
+               disable_agg_tids = iwl_mvm_get_queue_agg_tids(mvm, queue);
+
+               if (disable_agg_tids) {
+                       IWL_DEBUG_TX_QUEUES(mvm, "Disabling aggs on queue %d\n",
+                                           queue);
+                       iwl_mvm_invalidate_sta_queue(mvm, queue,
+                                                    disable_agg_tids, false);
+               }
+       }
 
        ssn = IEEE80211_SEQ_TO_SN(le16_to_cpu(hdr->seq_ctrl));
        iwl_mvm_enable_txq(mvm, queue, mac_queue, ssn, &cfg,
                           wdg_timeout);
 
+       /*
+        * Mark the queue as shared in the transport if needed.
+        * Note this has to be done after queue enablement because enablement
+        * can also set this value, and there is no indication there of shared
+        * queues.
+        */
+       if (shared_queue)
+               iwl_trans_txq_set_shared_mode(mvm->trans, queue, true);
+
        spin_lock_bh(&mvmsta->lock);
        mvmsta->tid_data[tid].txq_id = queue;
+       mvmsta->tid_data[tid].is_tid_active = true;
        mvmsta->tfd_queue_msk |= BIT(queue);
+       queue_state = mvmsta->tid_data[tid].state;
 
        if (mvmsta->reserved_queue == queue)
                mvmsta->reserved_queue = IEEE80211_INVAL_HW_QUEUE;
        spin_unlock_bh(&mvmsta->lock);
 
-       ret = iwl_mvm_sta_send_to_fw(mvm, sta, true, STA_MODIFY_QUEUES);
-       if (ret)
-               goto out_err;
+       if (!shared_queue) {
+               ret = iwl_mvm_sta_send_to_fw(mvm, sta, true, STA_MODIFY_QUEUES);
+               if (ret)
+                       goto out_err;
+
+               /* If we need to re-enable aggregations... */
+               if (queue_state == IWL_AGG_ON) {
+                       ret = iwl_mvm_sta_tx_agg(mvm, sta, tid, queue, true);
+                       if (ret)
+                               goto out_err;
+               }
+       } else {
+               /* Redirect queue, if needed */
+               ret = iwl_mvm_scd_queue_redirect(mvm, queue, tid, ac, ssn,
+                                                wdg_timeout, false);
+               if (ret)
+                       goto out_err;
+       }
 
        return 0;
 
@@ -476,6 +896,9 @@ void iwl_mvm_add_new_dqa_stream_wk(struct work_struct *wk)
        unsigned long deferred_tid_traffic;
        int sta_id, tid;
 
+       /* Check inactivity of queues */
+       iwl_mvm_inactivity_check(mvm);
+
        mutex_lock(&mvm->mutex);
 
        /* Go over all stations with deferred traffic */
@@ -505,6 +928,12 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm,
        struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
        int queue;
 
+       /*
+        * Check for inactive queues, so we don't reach a situation where we
+        * can't add a STA due to a queue shortage that doesn't really exist
+        */
+       iwl_mvm_inactivity_check(mvm);
+
        spin_lock_bh(&mvm->queue_info_lock);
 
        /* Make sure we have free resources for this STA */
@@ -514,7 +943,8 @@ static int iwl_mvm_reserve_sta_stream(struct iwl_mvm *mvm,
             IWL_MVM_QUEUE_FREE))
                queue = IWL_MVM_DQA_BSS_CLIENT_QUEUE;
        else
-               queue = iwl_mvm_find_free_queue(mvm, IWL_MVM_DQA_MIN_DATA_QUEUE,
+               queue = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+                                               IWL_MVM_DQA_MIN_DATA_QUEUE,
                                                IWL_MVM_DQA_MAX_DATA_QUEUE);
        if (queue < 0) {
                spin_unlock_bh(&mvm->queue_info_lock);
@@ -568,8 +998,11 @@ int iwl_mvm_add_sta(struct iwl_mvm *mvm,
        mvm_sta->tid_disable_agg = 0xffff; /* No aggs at first */
        mvm_sta->tfd_queue_msk = 0;
 
-       /* allocate new queues for a TDLS station */
-       if (sta->tdls) {
+       /*
+        * Allocate new queues for a TDLS station, unless we're in DQA mode,
+        * in which case they'll be allocated dynamically
+        */
+       if (!iwl_mvm_is_dqa_supported(mvm) && sta->tdls) {
                ret = iwl_mvm_tdls_sta_init(mvm, sta);
                if (ret)
                        return ret;
@@ -633,7 +1066,8 @@ int iwl_mvm_add_sta(struct iwl_mvm *mvm,
        return 0;
 
 err:
-       iwl_mvm_tdls_sta_deinit(mvm, sta);
+       if (!iwl_mvm_is_dqa_supported(mvm) && sta->tdls)
+               iwl_mvm_tdls_sta_deinit(mvm, sta);
        return ret;
 }
 
@@ -819,8 +1253,9 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
        if (iwl_mvm_has_new_rx_api(mvm))
                kfree(mvm_sta->dup_data);
 
-       if (vif->type == NL80211_IFTYPE_STATION &&
-           mvmvif->ap_sta_id == mvm_sta->sta_id) {
+       if ((vif->type == NL80211_IFTYPE_STATION &&
+            mvmvif->ap_sta_id == mvm_sta->sta_id) ||
+           iwl_mvm_is_dqa_supported(mvm)) {
                ret = iwl_mvm_drain_sta(mvm, mvm_sta, true);
                if (ret)
                        return ret;
@@ -838,16 +1273,19 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
                if (iwl_mvm_is_dqa_supported(mvm))
                        iwl_mvm_disable_sta_queues(mvm, vif, mvm_sta);
 
-               /* if we are associated - we can't remove the AP STA now */
-               if (vif->bss_conf.assoc)
-                       return ret;
+               if (vif->type == NL80211_IFTYPE_STATION &&
+                   mvmvif->ap_sta_id == mvm_sta->sta_id) {
+                       /* if associated - we can't remove the AP STA now */
+                       if (vif->bss_conf.assoc)
+                               return ret;
 
-               /* unassoc - go ahead - remove the AP STA now */
-               mvmvif->ap_sta_id = IWL_MVM_STATION_COUNT;
+                       /* unassoc - go ahead - remove the AP STA now */
+                       mvmvif->ap_sta_id = IWL_MVM_STATION_COUNT;
 
-               /* clear d0i3_ap_sta_id if no longer relevant */
-               if (mvm->d0i3_ap_sta_id == mvm_sta->sta_id)
-                       mvm->d0i3_ap_sta_id = IWL_MVM_STATION_COUNT;
+                       /* clear d0i3_ap_sta_id if no longer relevant */
+                       if (mvm->d0i3_ap_sta_id == mvm_sta->sta_id)
+                               mvm->d0i3_ap_sta_id = IWL_MVM_STATION_COUNT;
+               }
        }
 
        /*
@@ -885,7 +1323,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
        } else {
                spin_unlock_bh(&mvm_sta->lock);
 
-               if (sta->tdls)
+               if (!iwl_mvm_is_dqa_supported(mvm) && sta->tdls)
                        iwl_mvm_tdls_sta_deinit(mvm, sta);
 
                ret = iwl_mvm_rm_sta_common(mvm, mvm_sta->sta_id);
@@ -983,8 +1421,9 @@ int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm)
        lockdep_assert_held(&mvm->mutex);
 
        /* Map Aux queue to fifo - needs to happen before adding Aux station */
-       iwl_mvm_enable_ac_txq(mvm, mvm->aux_queue, mvm->aux_queue,
-                             IWL_MVM_TX_FIFO_MCAST, 0, wdg_timeout);
+       if (!iwl_mvm_is_dqa_supported(mvm))
+               iwl_mvm_enable_ac_txq(mvm, mvm->aux_queue, mvm->aux_queue,
+                                     IWL_MVM_TX_FIFO_MCAST, 0, wdg_timeout);
 
        /* Allocate aux station and assign to it the aux queue */
        ret = iwl_mvm_allocate_int_sta(mvm, &mvm->aux_sta, BIT(mvm->aux_queue),
@@ -992,6 +1431,19 @@ int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm)
        if (ret)
                return ret;
 
+       if (iwl_mvm_is_dqa_supported(mvm)) {
+               struct iwl_trans_txq_scd_cfg cfg = {
+                       .fifo = IWL_MVM_TX_FIFO_MCAST,
+                       .sta_id = mvm->aux_sta.sta_id,
+                       .tid = IWL_MAX_TID_COUNT,
+                       .aggregate = false,
+                       .frame_limit = IWL_FRAME_LIMIT,
+               };
+
+               iwl_mvm_enable_txq(mvm, mvm->aux_queue, mvm->aux_queue, 0, &cfg,
+                                  wdg_timeout);
+       }
+
        ret = iwl_mvm_add_int_sta_common(mvm, &mvm->aux_sta, NULL,
                                         MAC_INDEX_AUX, 0);
 
@@ -1316,8 +1768,8 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
 
        switch (status & IWL_ADD_STA_STATUS_MASK) {
        case ADD_STA_SUCCESS:
-               IWL_DEBUG_INFO(mvm, "RX BA Session %sed in fw\n",
-                              start ? "start" : "stopp");
+               IWL_DEBUG_HT(mvm, "RX BA Session %sed in fw\n",
+                            start ? "start" : "stopp");
                break;
        case ADD_STA_IMMEDIATE_BA_FAILURE:
                IWL_WARN(mvm, "RX BA Session refused by fw\n");
@@ -1372,13 +1824,16 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
                 * supposed to happen) and we will free the session data while
                 * RX is being processed in parallel
                 */
+               IWL_DEBUG_HT(mvm, "Sta %d(%d) is assigned to BAID %d\n",
+                            mvm_sta->sta_id, tid, baid);
                WARN_ON(rcu_access_pointer(mvm->baid_map[baid]));
                rcu_assign_pointer(mvm->baid_map[baid], baid_data);
-       } else if (mvm->rx_ba_sessions > 0) {
+       } else {
                u8 baid = mvm_sta->tid_to_baid[tid];
 
-               /* check that restart flow didn't zero the counter */
-               mvm->rx_ba_sessions--;
+               if (mvm->rx_ba_sessions > 0)
+                       /* check that restart flow didn't zero the counter */
+                       mvm->rx_ba_sessions--;
                if (!iwl_mvm_has_new_rx_api(mvm))
                        return 0;
 
@@ -1394,6 +1849,7 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
                del_timer_sync(&baid_data->session_timer);
                RCU_INIT_POINTER(mvm->baid_map[baid], NULL);
                kfree_rcu(baid_data, rcu_head);
+               IWL_DEBUG_HT(mvm, "BAID %d is free\n", baid);
        }
        return 0;
 
@@ -1402,8 +1858,8 @@ out_free:
        return ret;
 }
 
-static int iwl_mvm_sta_tx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
-                             int tid, u8 queue, bool start)
+int iwl_mvm_sta_tx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+                      int tid, u8 queue, bool start)
 {
        struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
        struct iwl_mvm_add_sta_cmd cmd = {};
@@ -1458,6 +1914,7 @@ const u8 tid_to_mac80211_ac[] = {
        IEEE80211_AC_VI,
        IEEE80211_AC_VO,
        IEEE80211_AC_VO,
+       IEEE80211_AC_VO, /* We treat MGMT as TID 8, which is set as AC_VO */
 };
 
 static const u8 tid_to_ucode_ac[] = {
@@ -1512,7 +1969,8 @@ int iwl_mvm_sta_tx_agg_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
        txq_id = mvmsta->tid_data[tid].txq_id;
        if (!iwl_mvm_is_dqa_supported(mvm) ||
            mvm->queue_info[txq_id].status != IWL_MVM_QUEUE_READY) {
-               txq_id = iwl_mvm_find_free_queue(mvm, mvm->first_agg_queue,
+               txq_id = iwl_mvm_find_free_queue(mvm, mvmsta->sta_id,
+                                                mvm->first_agg_queue,
                                                 mvm->last_agg_queue);
                if (txq_id < 0) {
                        ret = txq_id;
@@ -1907,6 +2365,13 @@ static int iwl_mvm_send_sta_key(struct iwl_mvm *mvm,
                key_flags |= cpu_to_le16(STA_KEY_FLG_WEP);
                memcpy(cmd.key + 3, keyconf->key, keyconf->keylen);
                break;
+       case WLAN_CIPHER_SUITE_GCMP_256:
+               key_flags |= cpu_to_le16(STA_KEY_FLG_KEY_32BYTES);
+               /* fall through */
+       case WLAN_CIPHER_SUITE_GCMP:
+               key_flags |= cpu_to_le16(STA_KEY_FLG_GCMP);
+               memcpy(cmd.key, keyconf->key, keyconf->keylen);
+               break;
        default:
                key_flags |= cpu_to_le16(STA_KEY_FLG_EXT);
                memcpy(cmd.key, keyconf->key, keyconf->keylen);
@@ -2035,6 +2500,8 @@ static int __iwl_mvm_set_sta_key(struct iwl_mvm *mvm,
        case WLAN_CIPHER_SUITE_CCMP:
        case WLAN_CIPHER_SUITE_WEP40:
        case WLAN_CIPHER_SUITE_WEP104:
+       case WLAN_CIPHER_SUITE_GCMP:
+       case WLAN_CIPHER_SUITE_GCMP_256:
                ret = iwl_mvm_send_sta_key(mvm, mvm_sta, keyconf, mcast,
                                           0, NULL, 0, key_offset);
                break;
index d2c58f1..bbc1cab 100644 (file)
@@ -321,6 +321,9 @@ enum iwl_mvm_agg_state {
  *     Basically when next_reclaimed reaches ssn, we can tell mac80211 that
  *     we are ready to finish the Tx AGG stop / start flow.
  * @tx_time: medium time consumed by this A-MPDU
+ * @is_tid_active: has this TID sent traffic in the last
+ *     %IWL_MVM_DQA_QUEUE_TIMEOUT time period. If %txq_id is invalid, this
+ *     field should be ignored.
  */
 struct iwl_mvm_tid_data {
        struct sk_buff_head deferred_tx_frames;
@@ -333,6 +336,7 @@ struct iwl_mvm_tid_data {
        u16 txq_id;
        u16 ssn;
        u16 tx_time;
+       bool is_tid_active;
 };
 
 static inline u16 iwl_mvm_tid_queued(struct iwl_mvm_tid_data *tid_data)
@@ -434,6 +438,7 @@ struct iwl_mvm_sta {
        bool tlc_amsdu;
        u8 agg_tids;
        u8 sleep_tx_count;
+       u8 avg_energy;
 };
 
 static inline struct iwl_mvm_sta *
@@ -509,6 +514,9 @@ int iwl_mvm_sta_tx_agg_stop(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 int iwl_mvm_sta_tx_agg_flush(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
                            struct ieee80211_sta *sta, u16 tid);
 
+int iwl_mvm_sta_tx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+                      int tid, u8 queue, bool start);
+
 int iwl_mvm_add_aux_sta(struct iwl_mvm *mvm);
 void iwl_mvm_del_aux_sta(struct iwl_mvm *mvm);
 
index 779bafc..c6585ab 100644 (file)
@@ -138,28 +138,19 @@ static void iwl_mvm_tx_csum(struct iwl_mvm *mvm, struct sk_buff *skb,
 
                protocol = ipv6h->nexthdr;
                while (protocol != NEXTHDR_NONE && ipv6_ext_hdr(protocol)) {
+                       struct ipv6_opt_hdr *hp;
+
                        /* only supported extension headers */
                        if (protocol != NEXTHDR_ROUTING &&
                            protocol != NEXTHDR_HOP &&
-                           protocol != NEXTHDR_DEST &&
-                           protocol != NEXTHDR_FRAGMENT) {
+                           protocol != NEXTHDR_DEST) {
                                skb_checksum_help(skb);
                                return;
                        }
 
-                       if (protocol == NEXTHDR_FRAGMENT) {
-                               struct frag_hdr *hp =
-                                       OPT_HDR(struct frag_hdr, skb, off);
-
-                               protocol = hp->nexthdr;
-                               off += sizeof(struct frag_hdr);
-                       } else {
-                               struct ipv6_opt_hdr *hp =
-                                       OPT_HDR(struct ipv6_opt_hdr, skb, off);
-
-                               protocol = hp->nexthdr;
-                               off += ipv6_optlen(hp);
-                       }
+                       hp = OPT_HDR(struct ipv6_opt_hdr, skb, off);
+                       protocol = hp->nexthdr;
+                       off += ipv6_optlen(hp);
                }
                /* if we get here - protocol now should be TCP/UDP */
 #endif
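The simplified extension-header walk above can be modelled outside the kernel. The flat packet buffer and the open-coded `ipv6_optlen()` below are assumptions for the demo; the real code walks `struct ipv6_opt_hdr` via `OPT_HDR()` on an skb:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* IPv6 extension header numbers, as in the kernel's ipv6.h */
#define NEXTHDR_HOP	0
#define NEXTHDR_ROUTING	43
#define NEXTHDR_DEST	60

struct opt_hdr {
	uint8_t nexthdr;
	uint8_t hdrlen;		/* length in 8-byte units, minus 1 */
};

/* Walk the supported extension headers and return the offset of the
 * upper-layer header; mirrors the loop above with ipv6_optlen()
 * open-coded as (hdrlen + 1) * 8. */
static size_t skip_ext_headers(const uint8_t *pkt, uint8_t protocol,
			       size_t off)
{
	while (protocol == NEXTHDR_HOP || protocol == NEXTHDR_ROUTING ||
	       protocol == NEXTHDR_DEST) {
		const struct opt_hdr *hp = (const void *)(pkt + off);

		protocol = hp->nexthdr;
		off += (hp->hdrlen + 1) * 8;	/* ipv6_optlen() */
	}
	return off;
}
```

Note the hunk deliberately drops NEXTHDR_FRAGMENT from the supported set: a fragment header's length field is encoded differently, so such packets now fall back to `skb_checksum_help()` instead.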
@@ -388,6 +379,23 @@ void iwl_mvm_set_tx_cmd_rate(struct iwl_mvm *mvm, struct iwl_tx_cmd *tx_cmd,
        tx_cmd->rate_n_flags = cpu_to_le32((u32)rate_plcp | rate_flags);
 }
 
+static inline void iwl_mvm_set_tx_cmd_pn(struct ieee80211_tx_info *info,
+                                        u8 *crypto_hdr)
+{
+       struct ieee80211_key_conf *keyconf = info->control.hw_key;
+       u64 pn;
+
+       pn = atomic64_inc_return(&keyconf->tx_pn);
+       crypto_hdr[0] = pn;
+       crypto_hdr[2] = 0;
+       crypto_hdr[3] = 0x20 | (keyconf->keyidx << 6);
+       crypto_hdr[1] = pn >> 8;
+       crypto_hdr[4] = pn >> 16;
+       crypto_hdr[5] = pn >> 24;
+       crypto_hdr[6] = pn >> 32;
+       crypto_hdr[7] = pn >> 40;
+}
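The byte layout the new helper writes — PN0/PN1 in bytes 0-1, byte 2 reserved, byte 3 carrying the Ext IV bit and key index, PN2-PN5 in bytes 4-7 — can be exercised standalone. This is a sketch of the same packing with a plain integer PN instead of the atomic `keyconf->tx_pn`:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the packing done by iwl_mvm_set_tx_cmd_pn():
 * CCMP/GCMP header layout - PN0/PN1 in bytes 0-1, byte 2 reserved,
 * byte 3 = Ext IV bit (0x20) | key index << 6, PN2-PN5 in bytes 4-7. */
static void pack_pn_hdr(uint8_t crypto_hdr[8], uint64_t pn, uint8_t keyidx)
{
	crypto_hdr[0] = pn;
	crypto_hdr[1] = pn >> 8;
	crypto_hdr[2] = 0;
	crypto_hdr[3] = 0x20 | (keyidx << 6);
	crypto_hdr[4] = pn >> 16;
	crypto_hdr[5] = pn >> 24;
	crypto_hdr[6] = pn >> 32;
	crypto_hdr[7] = pn >> 40;
}
```

Factoring this out lets the later GCMP hunk reuse the exact CCMP packing instead of duplicating the eight stores.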
+
 /*
  * Sets the fields in the Tx cmd that are crypto related
  */
@@ -405,15 +413,7 @@ static void iwl_mvm_set_tx_cmd_crypto(struct iwl_mvm *mvm,
        case WLAN_CIPHER_SUITE_CCMP:
        case WLAN_CIPHER_SUITE_CCMP_256:
                iwl_mvm_set_tx_cmd_ccmp(info, tx_cmd);
-               pn = atomic64_inc_return(&keyconf->tx_pn);
-               crypto_hdr[0] = pn;
-               crypto_hdr[2] = 0;
-               crypto_hdr[3] = 0x20 | (keyconf->keyidx << 6);
-               crypto_hdr[1] = pn >> 8;
-               crypto_hdr[4] = pn >> 16;
-               crypto_hdr[5] = pn >> 24;
-               crypto_hdr[6] = pn >> 32;
-               crypto_hdr[7] = pn >> 40;
+               iwl_mvm_set_tx_cmd_pn(info, crypto_hdr);
                break;
 
        case WLAN_CIPHER_SUITE_TKIP:
@@ -433,6 +433,18 @@ static void iwl_mvm_set_tx_cmd_crypto(struct iwl_mvm *mvm,
 
                memcpy(&tx_cmd->key[3], keyconf->key, keyconf->keylen);
                break;
+       case WLAN_CIPHER_SUITE_GCMP:
+       case WLAN_CIPHER_SUITE_GCMP_256:
+               /* TODO: Taking the key from the table might introduce a race
+                * when PTK rekeying is done, having an old packets with a PN
+                * based on the old key but the message encrypted with a new
+                * one.
+                * Need to handle this.
+                */
+               tx_cmd->sec_ctl |= TX_CMD_SEC_GCMP | TC_CMD_SEC_KEY_FROM_TABLE;
+               tx_cmd->key[0] = keyconf->hw_key_idx;
+               iwl_mvm_set_tx_cmd_pn(info, crypto_hdr);
+               break;
        default:
                tx_cmd->sec_ctl |= TX_CMD_SEC_EXT;
        }
@@ -534,6 +546,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
         * (this is not possible for unicast packets as a TDLS discovery
         * response is sent without a station entry); otherwise use the
         * AUX station.
+        * In DQA mode, if vif is of type STATION and frames are not multicast,
+        * they should be sent from the BSS queue. For example, TDLS setup
+        * frames should be sent on this queue, as they go through the AP.
         */
        sta_id = mvm->aux_sta.sta_id;
        if (info.control.vif) {
@@ -551,6 +566,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
 
                        if (ap_sta_id != IWL_MVM_STATION_COUNT)
                                sta_id = ap_sta_id;
+               } else if (iwl_mvm_is_dqa_supported(mvm) &&
+                          info.control.vif->type == NL80211_IFTYPE_STATION) {
+                       queue = IWL_MVM_DQA_BSS_CLIENT_QUEUE;
                }
        }
 
@@ -884,15 +902,17 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
                 * nullfunc frames should go to the MGMT queue regardless of QOS
                 */
                tid = IWL_MAX_TID_COUNT;
-               txq_id = mvmsta->tid_data[tid].txq_id;
        }
 
+       if (iwl_mvm_is_dqa_supported(mvm))
+               txq_id = mvmsta->tid_data[tid].txq_id;
+
        /* Copy MAC header from skb into command buffer */
        memcpy(tx_cmd->hdr, hdr, hdrlen);
 
        WARN_ON_ONCE(info->flags & IEEE80211_TX_CTL_SEND_AFTER_DTIM);
 
-       if (sta->tdls) {
+       if (sta->tdls && !iwl_mvm_is_dqa_supported(mvm)) {
                /* default to TID 0 for non-QoS packets */
                u8 tdls_tid = tid == IWL_MAX_TID_COUNT ? 0 : tid;
 
@@ -905,9 +925,12 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
                txq_id = mvmsta->tid_data[tid].txq_id;
        }
 
-       if (iwl_mvm_is_dqa_supported(mvm)) {
-               if (unlikely(mvmsta->tid_data[tid].txq_id ==
-                            IEEE80211_INVAL_HW_QUEUE)) {
+       /* Check if TXQ needs to be allocated or re-activated */
+       if (unlikely(txq_id == IEEE80211_INVAL_HW_QUEUE ||
+                    !mvmsta->tid_data[tid].is_tid_active) &&
+           iwl_mvm_is_dqa_supported(mvm)) {
+               /* If TXQ needs to be allocated... */
+               if (txq_id == IEEE80211_INVAL_HW_QUEUE) {
                        iwl_mvm_tx_add_stream(mvm, mvmsta, tid, skb);
 
                        /*
@@ -917,11 +940,22 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
                        iwl_trans_free_tx_cmd(mvm->trans, dev_cmd);
                        spin_unlock(&mvmsta->lock);
                        return 0;
+
                }
 
-               txq_id = mvmsta->tid_data[tid].txq_id;
+               /* If we are here - TXQ exists and needs to be re-activated */
+               spin_lock(&mvm->queue_info_lock);
+               mvm->queue_info[txq_id].status = IWL_MVM_QUEUE_READY;
+               mvmsta->tid_data[tid].is_tid_active = true;
+               spin_unlock(&mvm->queue_info_lock);
+
+               IWL_DEBUG_TX_QUEUES(mvm, "Re-activating queue %d for TX\n",
+                                   txq_id);
        }
 
+       /* Keep track of the time of the last frame for this RA/TID */
+       mvm->queue_info[txq_id].last_frame_time[tid] = jiffies;
+
        IWL_DEBUG_TX(mvm, "TX to [%d|%d] Q:%d - seq: 0x%x\n", mvmsta->sta_id,
                     tid, txq_id, IEEE80211_SEQ_TO_SN(seq_number));
 
@@ -1313,7 +1347,15 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
                        bool send_eosp_ndp = false;
 
                        spin_lock_bh(&mvmsta->lock);
-                       txq_agg = (mvmsta->tid_data[tid].state == IWL_AGG_ON);
+                       if (iwl_mvm_is_dqa_supported(mvm)) {
+                               enum iwl_mvm_agg_state state;
+
+                               state = mvmsta->tid_data[tid].state;
+                               txq_agg = (state == IWL_AGG_ON ||
+                                       state == IWL_EMPTYING_HW_QUEUE_DELBA);
+                       } else {
+                               txq_agg = txq_id >= mvm->first_agg_queue;
+                       }
 
                        if (!is_ndp) {
                                tid_data->next_reclaimed = next_reclaimed;
@@ -1644,7 +1686,7 @@ void iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
                iwl_mvm_tx_info_from_ba_notif(&ba_info, ba_notif, tid_data);
 
                IWL_DEBUG_TX_REPLY(mvm, "No reclaim. Update rs directly\n");
-               iwl_mvm_rs_tx_status(mvm, sta, tid, &ba_info);
+               iwl_mvm_rs_tx_status(mvm, sta, tid, &ba_info, false);
        }
 
 out:
index 161b99e..68f4e7f 100644 (file)
@@ -579,17 +579,29 @@ void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm)
                iwl_mvm_dump_umac_error_log(mvm);
 }
 
-int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 minq, u8 maxq)
+int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id, u8 minq, u8 maxq)
 {
        int i;
 
        lockdep_assert_held(&mvm->queue_info_lock);
 
+       /* Start by looking for a free queue */
        for (i = minq; i <= maxq; i++)
                if (mvm->queue_info[i].hw_queue_refcount == 0 &&
                    mvm->queue_info[i].status == IWL_MVM_QUEUE_FREE)
                        return i;
 
+       /*
+        * If no free queue found - settle for an inactive one to reconfigure.
+        * Make sure that the inactive queue either already belongs to this STA,
+        * or that if it belongs to another one - it isn't the reserved queue
+        */
+       for (i = minq; i <= maxq; i++)
+               if (mvm->queue_info[i].status == IWL_MVM_QUEUE_INACTIVE &&
+                   (sta_id == mvm->queue_info[i].ra_sta_id ||
+                    !mvm->queue_info[i].reserved))
+                       return i;
+
        return -ENOSPC;
 }
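The two-pass search above — free queues first, then inactive ones that either already belong to this STA or belong to another STA but aren't its reserved queue — can be sketched with a small model. The queue record below is an assumption for illustration, not the driver's struct:

```c
#include <assert.h>
#include <stdbool.h>

enum q_state { Q_FREE, Q_READY, Q_INACTIVE };

struct q_info {
	enum q_state state;
	int ra_sta_id;		/* station the queue currently serves */
	bool reserved;
};

/* Model of the two-pass search in iwl_mvm_find_free_queue(): a free
 * queue wins; otherwise an inactive queue is acceptable if it already
 * belongs to this STA, or if it isn't some other STA's reserved queue. */
static int find_queue(const struct q_info *q, int n, int sta_id)
{
	int i;

	for (i = 0; i < n; i++)
		if (q[i].state == Q_FREE)
			return i;
	for (i = 0; i < n; i++)
		if (q[i].state == Q_INACTIVE &&
		    (q[i].ra_sta_id == sta_id || !q[i].reserved))
			return i;
	return -1;		/* -ENOSPC */
}
```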
 
@@ -643,13 +655,21 @@ void iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue,
        }
 
        /* Update mappings and refcounts */
+       if (mvm->queue_info[queue].hw_queue_refcount > 0)
+               enable_queue = false;
+
        mvm->queue_info[queue].hw_queue_to_mac80211 |= BIT(mac80211_queue);
        mvm->queue_info[queue].hw_queue_refcount++;
-       if (mvm->queue_info[queue].hw_queue_refcount > 1)
-               enable_queue = false;
-       else
-               mvm->queue_info[queue].ra_sta_id = cfg->sta_id;
        mvm->queue_info[queue].tid_bitmap |= BIT(cfg->tid);
+       mvm->queue_info[queue].ra_sta_id = cfg->sta_id;
+
+       if (enable_queue) {
+               if (cfg->tid != IWL_MAX_TID_COUNT)
+                       mvm->queue_info[queue].mac80211_ac =
+                               tid_to_mac80211_ac[cfg->tid];
+               else
+                       mvm->queue_info[queue].mac80211_ac = IEEE80211_AC_VO;
+       }
 
        IWL_DEBUG_TX_QUEUES(mvm,
                            "Enabling TXQ #%d refcount=%d (mac80211 map:0x%x)\n",
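The refcount bookkeeping above — actually enabling the hardware queue only for its first user, while later users merely add their TID/mac-queue mapping — reduces to a small sketch (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

struct txq_info {
	int hw_queue_refcount;
	unsigned int tid_bitmap;
	int ra_sta_id;
};

/* Returns true when this mapping is the queue's first user, i.e. when
 * the caller should actually send the SCD queue-config command; later
 * users of a shared queue only update the bookkeeping. */
static bool map_tid_to_queue(struct txq_info *q, int tid, int sta_id)
{
	bool enable_queue = (q->hw_queue_refcount == 0);

	q->hw_queue_refcount++;
	q->tid_bitmap |= 1u << tid;
	q->ra_sta_id = sta_id;
	return enable_queue;
}
```

Note the hunk also changes the driver to record `ra_sta_id` unconditionally (not just for the first user), matching the model above.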
@@ -671,6 +691,10 @@ void iwl_mvm_enable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue,
                        .tid = cfg->tid,
                };
 
+               /* Set sta_id in the command, if it exists */
+               if (iwl_mvm_is_dqa_supported(mvm))
+                       cmd.sta_id = cfg->sta_id;
+
                iwl_trans_txq_enable_cfg(mvm->trans, queue, ssn, NULL,
                                         wdg_timeout);
                WARN(iwl_mvm_send_cmd_pdu(mvm, SCD_QUEUE_CFG, 0, sizeof(cmd),
@@ -752,6 +776,9 @@ void iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue,
        mvm->queue_info[queue].tid_bitmap = 0;
        mvm->queue_info[queue].hw_queue_to_mac80211 = 0;
 
+       /* Whether or not this is a reserved TXQ for a STA - mark it unreserved */
+       mvm->queue_info[queue].reserved = false;
+
        spin_unlock_bh(&mvm->queue_info_lock);
 
        iwl_trans_txq_disable(mvm->trans, queue, false);
@@ -1039,6 +1066,155 @@ out:
        ieee80211_connection_loss(vif);
 }
 
+/*
+ * Remove inactive TIDs of a given queue.
+ * If all queue TIDs are inactive - mark the queue as inactive
+ * If only some of the queue TIDs are inactive - unmap them from the queue
+ */
+static void iwl_mvm_remove_inactive_tids(struct iwl_mvm *mvm,
+                                        struct iwl_mvm_sta *mvmsta, int queue,
+                                        unsigned long tid_bitmap)
+{
+       int tid;
+
+       lockdep_assert_held(&mvmsta->lock);
+       lockdep_assert_held(&mvm->queue_info_lock);
+
+       /* Go over all non-active TIDs, incl. IWL_MAX_TID_COUNT (for mgmt) */
+       for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) {
+               /* If some TFDs are still queued - don't mark TID as inactive */
+               if (iwl_mvm_tid_queued(&mvmsta->tid_data[tid]))
+                       tid_bitmap &= ~BIT(tid);
+       }
+
+       /* If all TIDs in the queue are inactive - mark queue as inactive. */
+       if (tid_bitmap == mvm->queue_info[queue].tid_bitmap) {
+               mvm->queue_info[queue].status = IWL_MVM_QUEUE_INACTIVE;
+
+               for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1)
+                       mvmsta->tid_data[tid].is_tid_active = false;
+
+               IWL_DEBUG_TX_QUEUES(mvm, "Queue %d marked as inactive\n",
+                                   queue);
+               return;
+       }
+
+       /*
+        * If we are here, this is a shared queue and not all TIDs timed out.
+        * Remove the ones that did.
+        */
+       for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) {
+               int mac_queue = mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]];
+
+               mvmsta->tid_data[tid].txq_id = IEEE80211_INVAL_HW_QUEUE;
+               mvm->queue_info[queue].hw_queue_to_mac80211 &= ~BIT(mac_queue);
+               mvm->queue_info[queue].hw_queue_refcount--;
+               mvm->queue_info[queue].tid_bitmap &= ~BIT(tid);
+               mvmsta->tid_data[tid].is_tid_active = false;
+
+               IWL_DEBUG_TX_QUEUES(mvm,
+                                   "Removing inactive TID %d from shared Q:%d\n",
+                                   tid, queue);
+       }
+
+       IWL_DEBUG_TX_QUEUES(mvm,
+                           "TXQ #%d left with tid bitmap 0x%x\n", queue,
+                           mvm->queue_info[queue].tid_bitmap);
+
+       /*
+        * There may be different TIDs with the same mac queues, so make
+        * sure all TIDs have existing corresponding mac queues enabled
+        */
+       tid_bitmap = mvm->queue_info[queue].tid_bitmap;
+       for_each_set_bit(tid, &tid_bitmap, IWL_MAX_TID_COUNT + 1) {
+               mvm->queue_info[queue].hw_queue_to_mac80211 |=
+                       BIT(mvmsta->vif->hw_queue[tid_to_mac80211_ac[tid]]);
+       }
+
+       /* TODO: if queue was shared - need to re-enable AGGs */
+}
+
+void iwl_mvm_inactivity_check(struct iwl_mvm *mvm)
+{
+       unsigned long timeout_queues_map = 0;
+       unsigned long now = jiffies;
+       int i;
+
+       spin_lock_bh(&mvm->queue_info_lock);
+       for (i = 0; i < IWL_MAX_HW_QUEUES; i++)
+               if (mvm->queue_info[i].hw_queue_refcount > 0)
+                       timeout_queues_map |= BIT(i);
+       spin_unlock_bh(&mvm->queue_info_lock);
+
+       rcu_read_lock();
+
+       /*
+        * If a queue times out - mark it as INACTIVE (don't remove it right
+        * away if we don't have to). This is an optimization in case traffic
+        * comes later, and we don't HAVE to use a currently-inactive queue.
+        */
+       for_each_set_bit(i, &timeout_queues_map, IWL_MAX_HW_QUEUES) {
+               struct ieee80211_sta *sta;
+               struct iwl_mvm_sta *mvmsta;
+               u8 sta_id;
+               int tid;
+               unsigned long inactive_tid_bitmap = 0;
+               unsigned long queue_tid_bitmap;
+
+               spin_lock_bh(&mvm->queue_info_lock);
+               queue_tid_bitmap = mvm->queue_info[i].tid_bitmap;
+
+               /* If TXQ isn't in active use anyway - nothing to do here... */
+               if (mvm->queue_info[i].status != IWL_MVM_QUEUE_READY &&
+                   mvm->queue_info[i].status != IWL_MVM_QUEUE_SHARED) {
+                       spin_unlock_bh(&mvm->queue_info_lock);
+                       continue;
+               }
+
+               /* Check to see if there are inactive TIDs on this queue */
+               for_each_set_bit(tid, &queue_tid_bitmap,
+                                IWL_MAX_TID_COUNT + 1) {
+                       if (time_after(mvm->queue_info[i].last_frame_time[tid] +
+                                      IWL_MVM_DQA_QUEUE_TIMEOUT, now))
+                               continue;
+
+                       inactive_tid_bitmap |= BIT(tid);
+               }
+               spin_unlock_bh(&mvm->queue_info_lock);
+
+               /* If all TIDs are active - finish check on this queue */
+               if (!inactive_tid_bitmap)
+                       continue;
+
+               /*
+                * If we got here - the queue hasn't been served recently and
+                * is in use
+                */
+
+               sta_id = mvm->queue_info[i].ra_sta_id;
+               sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]);
+
+               /*
+                * If the STA doesn't exist anymore, it isn't an error. It could
+                * be that it was removed since getting the queues, and in this
+                * case it should've inactivated its queues anyway.
+                */
+               if (IS_ERR_OR_NULL(sta))
+                       continue;
+
+               mvmsta = iwl_mvm_sta_from_mac80211(sta);
+
+               spin_lock_bh(&mvmsta->lock);
+               spin_lock(&mvm->queue_info_lock);
+               iwl_mvm_remove_inactive_tids(mvm, mvmsta, i,
+                                            inactive_tid_bitmap);
+               spin_unlock(&mvm->queue_info_lock);
+               spin_unlock_bh(&mvmsta->lock);
+       }
+
+       rcu_read_unlock();
+}
+
 int iwl_mvm_send_lqm_cmd(struct ieee80211_vif *vif,
                         enum iwl_lqm_cmd_operatrions operation,
                         u32 duration, u32 timeout)
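The inactivity check above snapshots per-queue state under a lock and then compares jiffies-based timestamps with time_after(), which stays correct even when the counter wraps. A minimal userspace sketch of that comparison, with the kernel's macro reimplemented over u32 (the real one lives in include/linux/jiffies.h, and the timeout constant here is an illustrative stand-in for IWL_MVM_DQA_QUEUE_TIMEOUT):

```c
#include <assert.h>
#include <stdint.h>

/* Wraparound-safe "a is after b" check, modeled on the kernel's
 * time_after(): cast the difference to a signed type so values on
 * opposite sides of the wrap point still compare correctly. */
static int time_after_u32(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}

/* Mirror of the per-TID test in iwl_mvm_inactivity_check(): a TID is
 * inactive once last_frame_time + timeout is no longer in the future. */
static int tid_is_inactive(uint32_t last_frame_time, uint32_t timeout,
			   uint32_t now)
{
	return !time_after_u32(last_frame_time + timeout, now);
}
```

The signed-difference trick is why the check keeps working across jiffies wraparound, as long as the two timestamps are less than half the counter range apart.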
index a588b05..78cf9a7 100644 (file)
@@ -433,6 +433,7 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
 /* 8000 Series */
        {IWL_PCI_DEVICE(0x24F3, 0x0010, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x1010, iwl8260_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24F3, 0x10B0, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x0130, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x1130, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x0132, iwl8260_2ac_cfg)},
@@ -454,6 +455,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
        {IWL_PCI_DEVICE(0x24F3, 0xD010, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0xC050, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0xD050, iwl8260_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24F3, 0xD0B0, iwl8260_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24F3, 0xB0B0, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x8010, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x8110, iwl8260_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24F3, 0x9010, iwl8260_2ac_cfg)},
@@ -481,6 +484,8 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
        {IWL_PCI_DEVICE(0x24FD, 0x0010, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x0110, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x1110, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x1130, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0130, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x1010, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x0050, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x0150, iwl8265_2ac_cfg)},
@@ -491,6 +496,10 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
        {IWL_PCI_DEVICE(0x24FD, 0x0810, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x9110, iwl8265_2ac_cfg)},
        {IWL_PCI_DEVICE(0x24FD, 0x8130, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0910, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0930, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0950, iwl8265_2ac_cfg)},
+       {IWL_PCI_DEVICE(0x24FD, 0x0850, iwl8265_2ac_cfg)},
 
 /* 9000 Series */
        {IWL_PCI_DEVICE(0x2526, 0x0000, iwl9260_2ac_cfg)},
@@ -507,6 +516,9 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
        {IWL_PCI_DEVICE(0x2526, 0x1420, iwl5165_2ac_cfg)},
        {IWL_PCI_DEVICE(0x9DF0, 0x0710, iwl5165_2ac_cfg)},
        {IWL_PCI_DEVICE(0x9DF0, 0x2A10, iwl5165_2ac_cfg)},
+
+/* a000 Series */
+       {IWL_PCI_DEVICE(0x2720, 0x0A10, iwla000_2ac_cfg)},
 #endif /* CONFIG_IWLMVM */
 
        {0}
@@ -598,7 +610,6 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
        const struct iwl_cfg *cfg_7265d __maybe_unused = NULL;
        const struct iwl_cfg *cfg_9260lc __maybe_unused = NULL;
        struct iwl_trans *iwl_trans;
-       struct iwl_trans_pcie *trans_pcie;
        int ret;
 
        iwl_trans = iwl_trans_pcie_alloc(pdev, ent, cfg);
@@ -636,12 +647,10 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 #endif
 
        pci_set_drvdata(pdev, iwl_trans);
+       iwl_trans->drv = iwl_drv_start(iwl_trans, cfg);
 
-       trans_pcie = IWL_TRANS_GET_PCIE_TRANS(iwl_trans);
-       trans_pcie->drv = iwl_drv_start(iwl_trans, cfg);
-
-       if (IS_ERR(trans_pcie->drv)) {
-               ret = PTR_ERR(trans_pcie->drv);
+       if (IS_ERR(iwl_trans->drv)) {
+               ret = PTR_ERR(iwl_trans->drv);
                goto out_free_trans;
        }
 
@@ -680,7 +689,7 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
        return 0;
 
 out_free_drv:
-       iwl_drv_stop(trans_pcie->drv);
+       iwl_drv_stop(iwl_trans->drv);
 out_free_trans:
        iwl_trans_pcie_free(iwl_trans);
        return ret;
@@ -689,7 +698,6 @@ out_free_trans:
 static void iwl_pci_remove(struct pci_dev *pdev)
 {
        struct iwl_trans *trans = pci_get_drvdata(pdev);
-       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 
        /* if RTPM was in use, restore it to the state before probe */
        if (trans->runtime_pm_mode != IWL_PLAT_PM_MODE_DISABLED) {
@@ -700,7 +708,7 @@ static void iwl_pci_remove(struct pci_dev *pdev)
                pm_runtime_forbid(trans->dev);
        }
 
-       iwl_drv_stop(trans_pcie->drv);
+       iwl_drv_stop(trans->drv);
 
        iwl_trans_pcie_free(trans);
 }
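The hunks above move the `drv` pointer from the PCIe-private struct onto the generic `iwl_trans`, so probe/remove no longer need the IWL_TRANS_GET_PCIE_TRANS cast. A hedged sketch of that refactor with stand-in struct names (illustrative, not the driver's real layout):

```c
#include <assert.h>

/* Stand-in types mimicking iwl_drv / iwl_trans / iwl_trans_pcie */
struct drv { int id; };

struct trans {
	struct drv *drv;	/* generic: common code can reach it */
};

struct trans_pcie {
	struct trans *trans;
	/* before the refactor the drv pointer lived here, so common
	 * code had to cast to the PCIe-private struct first */
};

/* After the refactor, probe/remove use the generic object directly */
static struct drv *trans_get_drv(struct trans *t)
{
	return t->drv;
}

static int refactor_demo(void)
{
	static struct drv d = { 42 };
	static struct trans t;

	t.drv = &d;		/* analogous to the new assignment in probe */
	return trans_get_drv(&t)->id;
}
```

The payoff is visible in iwl_pci_remove(): the bus-specific local variable disappears entirely.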
index de6974f..11e347d 100644 (file)
@@ -68,12 +68,14 @@ struct iwl_host_cmd;
  * struct iwl_rx_mem_buffer
  * @page_dma: bus address of rxb page
  * @page: driver's pointer to the rxb page
+ * @invalid: rxb is in driver ownership - not owned by HW
  * @vid: index of this rxb in the global table
  */
 struct iwl_rx_mem_buffer {
        dma_addr_t page_dma;
        struct page *page;
        u16 vid;
+       bool invalid;
        struct list_head list;
 };
 
@@ -230,15 +232,16 @@ struct iwl_queue {
 #define TFD_CMD_SLOTS 32
 
 /*
- * The FH will write back to the first TB only, so we need
- * to copy some data into the buffer regardless of whether
- * it should be mapped or not. This indicates how big the
- * first TB must be to include the scratch buffer. Since
- * the scratch is 4 bytes at offset 12, it's 16 now. If we
- * make it bigger then allocations will be bigger and copy
- * slower, so that's probably not useful.
+ * The FH will write back to the first TB only, so we need to copy some data
+ * into the buffer regardless of whether it should be mapped or not.
+ * This indicates how big the first TB must be to include the scratch buffer
+ * and the assigned PN.
+ * Since PN location is 16 bytes at offset 24, it's 40 now.
+ * If we make it bigger then allocations will be bigger and copy slower, so
+ * that's probably not useful.
  */
-#define IWL_HCMD_SCRATCHBUF_SIZE       16
+#define IWL_FIRST_TB_SIZE      40
+#define IWL_FIRST_TB_SIZE_ALIGN ALIGN(IWL_FIRST_TB_SIZE, 64)
 
 struct iwl_pcie_txq_entry {
        struct iwl_device_cmd *cmd;
@@ -248,20 +251,18 @@ struct iwl_pcie_txq_entry {
        struct iwl_cmd_meta meta;
 };
 
-struct iwl_pcie_txq_scratch_buf {
-       struct iwl_cmd_header hdr;
-       u8 buf[8];
-       __le32 scratch;
+struct iwl_pcie_first_tb_buf {
+       u8 buf[IWL_FIRST_TB_SIZE_ALIGN];
 };
 
 /**
  * struct iwl_txq - Tx Queue for DMA
  * @q: generic Rx/Tx queue descriptor
  * @tfds: transmit frame descriptors (DMA memory)
- * @scratchbufs: start of command headers, including scratch buffers, for
+ * @first_tb_bufs: start of command headers, including scratch buffers, for
  *     the writeback -- this is DMA memory and an array holding one buffer
  *     for each command on the queue
- * @scratchbufs_dma: DMA address for the scratchbufs start
+ * @first_tb_dma: DMA address for the first_tb_bufs start
  * @entries: transmit entries (driver state)
  * @lock: queue lock
  * @stuck_timer: timer that fires if queue gets stuck
@@ -279,8 +280,8 @@ struct iwl_pcie_txq_scratch_buf {
 struct iwl_txq {
        struct iwl_queue q;
        struct iwl_tfd *tfds;
-       struct iwl_pcie_txq_scratch_buf *scratchbufs;
-       dma_addr_t scratchbufs_dma;
+       struct iwl_pcie_first_tb_buf *first_tb_bufs;
+       dma_addr_t first_tb_dma;
        struct iwl_pcie_txq_entry *entries;
        spinlock_t lock;
        unsigned long frozen_expiry_remainder;
@@ -296,10 +297,10 @@ struct iwl_txq {
 };
 
 static inline dma_addr_t
-iwl_pcie_get_scratchbuf_dma(struct iwl_txq *txq, int idx)
+iwl_pcie_get_first_tb_dma(struct iwl_txq *txq, int idx)
 {
-       return txq->scratchbufs_dma +
-              sizeof(struct iwl_pcie_txq_scratch_buf) * idx;
+       return txq->first_tb_dma +
+              sizeof(struct iwl_pcie_first_tb_buf) * idx;
 }
 
 struct iwl_tso_hdr_page {
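The hunks above grow the first-TB copy area from 16 to 40 bytes and pad each per-entry buffer to a 64-byte boundary, so iwl_pcie_get_first_tb_dma() reduces to pointer arithmetic off a single DMA allocation. A small sketch of that arithmetic (ALIGN_UP reimplements the kernel's ALIGN macro for power-of-two boundaries; the sizes mirror IWL_FIRST_TB_SIZE and IWL_FIRST_TB_SIZE_ALIGN):

```c
#include <assert.h>
#include <stdint.h>

/* Kernel-style ALIGN: round x up to a power-of-two boundary a */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

#define FIRST_TB_SIZE		40	/* scratch + assigned PN */
#define FIRST_TB_SIZE_ALIGN	ALIGN_UP(FIRST_TB_SIZE, 64)

/* One aligned buffer per queue entry, all carved from a single DMA
 * allocation starting at first_tb_dma */
static uint64_t get_first_tb_dma(uint64_t first_tb_dma, int idx)
{
	return first_tb_dma + FIRST_TB_SIZE_ALIGN * idx;
}
```

With a 40-byte payload aligned to 64, each entry's writeback buffer lands on its own cache-line-sized stride, matching the buf[IWL_FIRST_TB_SIZE_ALIGN] array in iwl_pcie_first_tb_buf.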
@@ -313,7 +314,6 @@ struct iwl_tso_hdr_page {
  * @rx_pool: initial pool of iwl_rx_mem_buffer for all the queues
  * @global_table: table mapping received VID from hw to rxb
  * @rba: allocator for RX replenishing
- * @drv - pointer to iwl_drv
  * @trans: pointer to the generic transport area
  * @scd_base_addr: scheduler sram base address in SRAM
  * @scd_bc_tbls: pointer to the byte count table of the scheduler
@@ -351,7 +351,6 @@ struct iwl_trans_pcie {
        struct iwl_rx_mem_buffer *global_table[RX_POOL_SIZE];
        struct iwl_rb_allocator rba;
        struct iwl_trans *trans;
-       struct iwl_drv *drv;
 
        struct net_device napi_dev;
 
@@ -385,6 +384,8 @@ struct iwl_trans_pcie {
        wait_queue_head_t wait_command_queue;
        wait_queue_head_t d0i3_waitq;
 
+       u8 page_offs, dev_cmd_offs;
+
        u8 cmd_queue;
        u8 cmd_fifo;
        unsigned int cmd_q_wdg_timeout;
@@ -471,6 +472,10 @@ void iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int queue, u16 ssn,
                               unsigned int wdg_timeout);
 void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int queue,
                                bool configure_scd);
+void iwl_trans_pcie_txq_set_shared_mode(struct iwl_trans *trans, u32 txq_id,
+                                       bool shared_mode);
+void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans,
+                                 struct iwl_txq *txq);
 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
                      struct iwl_device_cmd *dev_cmd, int txq_id);
 void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans);
@@ -496,7 +501,7 @@ void iwl_pcie_dump_csr(struct iwl_trans *trans);
 /*****************************************************
 * Helpers
 ******************************************************/
-static inline void iwl_disable_interrupts(struct iwl_trans *trans)
+static inline void _iwl_disable_interrupts(struct iwl_trans *trans)
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 
@@ -519,7 +524,16 @@ static inline void iwl_disable_interrupts(struct iwl_trans *trans)
        IWL_DEBUG_ISR(trans, "Disabled interrupts\n");
 }
 
-static inline void iwl_enable_interrupts(struct iwl_trans *trans)
+static inline void iwl_disable_interrupts(struct iwl_trans *trans)
+{
+       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+       spin_lock(&trans_pcie->irq_lock);
+       _iwl_disable_interrupts(trans);
+       spin_unlock(&trans_pcie->irq_lock);
+}
+
+static inline void _iwl_enable_interrupts(struct iwl_trans *trans)
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
 
@@ -542,6 +556,14 @@ static inline void iwl_enable_interrupts(struct iwl_trans *trans)
        }
 }
 
+static inline void iwl_enable_interrupts(struct iwl_trans *trans)
+{
+       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+       spin_lock(&trans_pcie->irq_lock);
+       _iwl_enable_interrupts(trans);
+       spin_unlock(&trans_pcie->irq_lock);
+}
 static inline void iwl_enable_hw_int_msk_msix(struct iwl_trans *trans, u32 msk)
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -673,4 +695,6 @@ static inline int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans)
 int iwl_pci_fw_exit_d0i3(struct iwl_trans *trans);
 int iwl_pci_fw_enter_d0i3(struct iwl_trans *trans);
 
+void iwl_pcie_enable_rx_wake(struct iwl_trans *trans, bool enable);
+
 #endif /* __iwl_trans_int_pcie_h__ */
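The header above splits interrupt toggling into an unlocked `_iwl_*` primitive plus a locked wrapper, so paths that already hold irq_lock (the ISR, ICT reset) call the underscore variant while everyone else gets locking for free. A minimal pthread-based sketch of that idiom (names are illustrative):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t irq_lock = PTHREAD_MUTEX_INITIALIZER;
static int ints_enabled;

/* Unlocked primitive: caller must already hold irq_lock */
static void _enable_interrupts(void)
{
	ints_enabled = 1;
}

/* Public wrapper: takes the lock, then defers to the primitive */
static void enable_interrupts(void)
{
	pthread_mutex_lock(&irq_lock);
	_enable_interrupts();
	pthread_mutex_unlock(&irq_lock);
}

/* A path that manages the lock itself (like the ISR or ICT reset)
 * calls the underscore variant inside its own critical section,
 * avoiding a recursive-lock deadlock */
static void isr_like_path(void)
{
	pthread_mutex_lock(&irq_lock);
	_enable_interrupts();
	pthread_mutex_unlock(&irq_lock);
}
```

This mirrors why the trans.c hunks later in the patch can drop their explicit spin_lock/spin_unlock pairs around iwl_disable_interrupts(): the locking moved into the wrapper.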
index 0a4a3c5..5c36e6d 100644 (file)
@@ -161,21 +161,21 @@ static inline __le32 iwl_pcie_dma_addr2rbd_ptr(dma_addr_t dma_addr)
        return cpu_to_le32((u32)(dma_addr >> 8));
 }
 
-static void iwl_pcie_write_prph_64_no_grab(struct iwl_trans *trans, u64 ofs,
-                                          u64 val)
-{
-       iwl_write_prph_no_grab(trans, ofs, val & 0xffffffff);
-       iwl_write_prph_no_grab(trans, ofs + 4, val >> 32);
-}
-
 /*
  * iwl_pcie_rx_stop - stops the Rx DMA
  */
 int iwl_pcie_rx_stop(struct iwl_trans *trans)
 {
-       iwl_write_direct32(trans, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
-       return iwl_poll_direct_bit(trans, FH_MEM_RSSR_RX_STATUS_REG,
-                                  FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE, 1000);
+       if (trans->cfg->mq_rx_supported) {
+               iwl_write_prph(trans, RFH_RXF_DMA_CFG, 0);
+               return iwl_poll_prph_bit(trans, RFH_GEN_STATUS,
+                                          RXF_DMA_IDLE, RXF_DMA_IDLE, 1000);
+       } else {
+               iwl_write_direct32(trans, FH_MEM_RCSR_CHNL0_CONFIG_REG, 0);
+               return iwl_poll_direct_bit(trans, FH_MEM_RSSR_RX_STATUS_REG,
+                                          FH_RSSR_CHNL0_RX_STATUS_CHNL_IDLE,
+                                          1000);
+       }
 }
 
 /*
@@ -211,12 +211,8 @@ static void iwl_pcie_rxq_inc_wr_ptr(struct iwl_trans *trans,
        if (trans->cfg->mq_rx_supported)
                iwl_write32(trans, RFH_Q_FRBDCB_WIDX_TRG(rxq->id),
                            rxq->write_actual);
-       /*
-        * write to FH_RSCSR_CHNL0_WPTR register even in MQ as a W/A to
-        * hardware shadow registers bug - writing to RFH_Q_FRBDCB_WIDX will
-        * not wake the NIC.
-        */
-       iwl_write32(trans, FH_RSCSR_CHNL0_WPTR, rxq->write_actual);
+       else
+               iwl_write32(trans, FH_RSCSR_CHNL0_WPTR, rxq->write_actual);
 }
 
 static void iwl_pcie_rxq_check_wrptr(struct iwl_trans *trans)
@@ -237,10 +233,10 @@ static void iwl_pcie_rxq_check_wrptr(struct iwl_trans *trans)
 }
 
 /*
- * iwl_pcie_rxq_mq_restock - restock implementation for multi-queue rx
+ * iwl_pcie_rxmq_restock - restock implementation for multi-queue rx
  */
-static void iwl_pcie_rxq_mq_restock(struct iwl_trans *trans,
-                                   struct iwl_rxq *rxq)
+static void iwl_pcie_rxmq_restock(struct iwl_trans *trans,
+                                 struct iwl_rxq *rxq)
 {
        struct iwl_rx_mem_buffer *rxb;
 
@@ -263,7 +259,7 @@ static void iwl_pcie_rxq_mq_restock(struct iwl_trans *trans,
                rxb = list_first_entry(&rxq->rx_free, struct iwl_rx_mem_buffer,
                                       list);
                list_del(&rxb->list);
-
+               rxb->invalid = false;
                /* 12 first bits are expected to be empty */
                WARN_ON(rxb->page_dma & DMA_BIT_MASK(12));
                /* Point to Rx buffer via next RBD in circular buffer */
@@ -285,10 +281,10 @@ static void iwl_pcie_rxq_mq_restock(struct iwl_trans *trans,
 }
 
 /*
- * iwl_pcie_rxq_sq_restock - restock implementation for single queue rx
+ * iwl_pcie_rxsq_restock - restock implementation for single queue rx
  */
-static void iwl_pcie_rxq_sq_restock(struct iwl_trans *trans,
-                                   struct iwl_rxq *rxq)
+static void iwl_pcie_rxsq_restock(struct iwl_trans *trans,
+                                 struct iwl_rxq *rxq)
 {
        struct iwl_rx_mem_buffer *rxb;
 
@@ -314,6 +310,7 @@ static void iwl_pcie_rxq_sq_restock(struct iwl_trans *trans,
                rxb = list_first_entry(&rxq->rx_free, struct iwl_rx_mem_buffer,
                                       list);
                list_del(&rxb->list);
+               rxb->invalid = false;
 
                /* Point to Rx buffer via next RBD in circular buffer */
                bd[rxq->write] = iwl_pcie_dma_addr2rbd_ptr(rxb->page_dma);
@@ -347,9 +344,9 @@ static
 void iwl_pcie_rxq_restock(struct iwl_trans *trans, struct iwl_rxq *rxq)
 {
        if (trans->cfg->mq_rx_supported)
-               iwl_pcie_rxq_mq_restock(trans, rxq);
+               iwl_pcie_rxmq_restock(trans, rxq);
        else
-               iwl_pcie_rxq_sq_restock(trans, rxq);
+               iwl_pcie_rxsq_restock(trans, rxq);
 }
 
 /*
@@ -764,6 +761,23 @@ static void iwl_pcie_rx_hw_init(struct iwl_trans *trans, struct iwl_rxq *rxq)
                iwl_set_bit(trans, CSR_INT_COALESCING, IWL_HOST_INT_OPER_MODE);
 }
 
+void iwl_pcie_enable_rx_wake(struct iwl_trans *trans, bool enable)
+{
+       /*
+        * Turn on the chicken-bits that cause MAC wakeup for RX-related
+        * values.
+        * This costs some power, but is needed to work around a 9000
+        * integrated A-step bug where shadow registers are not in the
+        * retention list and their value is lost when the NIC powers down
+        */
+       if (trans->cfg->integrated) {
+               iwl_set_bit(trans, CSR_MAC_SHADOW_REG_CTRL,
+                           CSR_MAC_SHADOW_REG_CTRL_RX_WAKE);
+               iwl_set_bit(trans, CSR_MAC_SHADOW_REG_CTL2,
+                           CSR_MAC_SHADOW_REG_CTL2_RX_WAKE);
+       }
+}
+
 static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -796,17 +810,17 @@ static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
 
        for (i = 0; i < trans->num_rx_queues; i++) {
                /* Tell device where to find RBD free table in DRAM */
-               iwl_pcie_write_prph_64_no_grab(trans,
-                                              RFH_Q_FRBDCB_BA_LSB(i),
-                                              trans_pcie->rxq[i].bd_dma);
+               iwl_write_prph64_no_grab(trans,
+                                        RFH_Q_FRBDCB_BA_LSB(i),
+                                        trans_pcie->rxq[i].bd_dma);
                /* Tell device where to find RBD used table in DRAM */
-               iwl_pcie_write_prph_64_no_grab(trans,
-                                              RFH_Q_URBDCB_BA_LSB(i),
-                                              trans_pcie->rxq[i].used_bd_dma);
+               iwl_write_prph64_no_grab(trans,
+                                        RFH_Q_URBDCB_BA_LSB(i),
+                                        trans_pcie->rxq[i].used_bd_dma);
                /* Tell device where in DRAM to update its Rx status */
-               iwl_pcie_write_prph_64_no_grab(trans,
-                                              RFH_Q_URBD_STTS_WPTR_LSB(i),
-                                              trans_pcie->rxq[i].rb_stts_dma);
+               iwl_write_prph64_no_grab(trans,
+                                        RFH_Q_URBD_STTS_WPTR_LSB(i),
+                                        trans_pcie->rxq[i].rb_stts_dma);
                /* Reset device indice tables */
                iwl_write_prph_no_grab(trans, RFH_Q_FRBDCB_WIDX(i), 0);
                iwl_write_prph_no_grab(trans, RFH_Q_FRBDCB_RIDX(i), 0);
@@ -815,33 +829,32 @@ static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
                enabled |= BIT(i) | BIT(i + 16);
        }
 
-       /* restock default queue */
-       iwl_pcie_rxq_mq_restock(trans, &trans_pcie->rxq[0]);
-
        /*
         * Enable Rx DMA
-        * Single frame mode
         * Rx buffer size 4 or 8k or 12k
         * Min RB size 4 or 8
         * Drop frames that exceed RB size
         * 512 RBDs
         */
        iwl_write_prph_no_grab(trans, RFH_RXF_DMA_CFG,
-                              RFH_DMA_EN_ENABLE_VAL |
-                              rb_size | RFH_RXF_DMA_SINGLE_FRAME_MASK |
+                              RFH_DMA_EN_ENABLE_VAL | rb_size |
                               RFH_RXF_DMA_MIN_RB_4_8 |
                               RFH_RXF_DMA_DROP_TOO_LARGE_MASK |
                               RFH_RXF_DMA_RBDCB_SIZE_512);
 
        /*
         * Activate DMA snooping.
-        * Set RX DMA chunk size to 64B
+        * Set RX DMA chunk size to 64B for IOSF and 128B for PCIe
         * Default queue is 0
         */
        iwl_write_prph_no_grab(trans, RFH_GEN_CFG, RFH_GEN_CFG_RFH_DMA_SNOOP |
                               (DEFAULT_RXQ_NUM <<
                                RFH_GEN_CFG_DEFAULT_RXQ_NUM_POS) |
-                              RFH_GEN_CFG_SERVICE_DMA_SNOOP);
+                              RFH_GEN_CFG_SERVICE_DMA_SNOOP |
+                              (trans->cfg->integrated ?
+                               RFH_GEN_CFG_RB_CHUNK_SIZE_64 :
+                               RFH_GEN_CFG_RB_CHUNK_SIZE_128) <<
+                              RFH_GEN_CFG_RB_CHUNK_SIZE_POS);
        /* Enable the relevant rx queues */
        iwl_write_prph_no_grab(trans, RFH_RXF_RXQ_ACTIVE, enabled);
 
@@ -849,6 +862,8 @@ static void iwl_pcie_rx_mq_hw_init(struct iwl_trans *trans)
 
        /* Set interrupt coalescing timer to default (2048 usecs) */
        iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_TIMEOUT_DEF);
+
+       iwl_pcie_enable_rx_wake(trans, true);
 }
 
 static void iwl_pcie_rx_init_rxb_lists(struct iwl_rxq *rxq)
@@ -939,16 +954,18 @@ int iwl_pcie_rx_init(struct iwl_trans *trans)
                else
                        list_add(&rxb->list, &def_rxq->rx_used);
                trans_pcie->global_table[i] = rxb;
-               rxb->vid = (u16)i;
+               rxb->vid = (u16)(i + 1);
+               rxb->invalid = true;
        }
 
        iwl_pcie_rxq_alloc_rbs(trans, GFP_KERNEL, def_rxq);
-       if (trans->cfg->mq_rx_supported) {
+
+       if (trans->cfg->mq_rx_supported)
                iwl_pcie_rx_mq_hw_init(trans);
-       } else {
-               iwl_pcie_rxq_sq_restock(trans, def_rxq);
+       else
                iwl_pcie_rx_hw_init(trans, def_rxq);
-       }
+
+       iwl_pcie_rxq_restock(trans, def_rxq);
 
        spin_lock(&def_rxq->lock);
        iwl_pcie_rxq_inc_wr_ptr(trans, def_rxq);
@@ -1087,6 +1104,9 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
                if (pkt->len_n_flags == cpu_to_le32(FH_RSCSR_FRAME_INVALID))
                        break;
 
+               WARN_ON((le32_to_cpu(pkt->len_n_flags) & FH_RSCSR_RXQ_MASK) >>
+                       FH_RSCSR_RXQ_POS != rxq->id);
+
                IWL_DEBUG_RX(trans,
                             "cmd at offset %d: %s (0x%.2x, seq 0x%x)\n",
                             rxcb._offset,
@@ -1224,10 +1244,19 @@ restart:
                         */
                        u16 vid = le32_to_cpu(rxq->used_bd[i]) & 0x0FFF;
 
-                       if (WARN(vid >= ARRAY_SIZE(trans_pcie->global_table),
-                                "Invalid rxb index from HW %u\n", (u32)vid))
+                       if (WARN(!vid ||
+                                vid > ARRAY_SIZE(trans_pcie->global_table),
+                                "Invalid rxb index from HW %u\n", (u32)vid)) {
+                               iwl_force_nmi(trans);
+                               goto out;
+                       }
+                       rxb = trans_pcie->global_table[vid - 1];
+                       if (WARN(rxb->invalid,
+                                "Invalid rxb from HW %u\n", (u32)vid)) {
+                               iwl_force_nmi(trans);
                                goto out;
-                       rxb = trans_pcie->global_table[vid];
+                       }
+                       rxb->invalid = true;
                } else {
                        rxb = rxq->queue[i];
                        rxq->queue[i] = NULL;
@@ -1507,7 +1536,7 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
                 * have anything to service
                 */
                if (test_bit(STATUS_INT_ENABLED, &trans->status))
-                       iwl_enable_interrupts(trans);
+                       _iwl_enable_interrupts(trans);
                spin_unlock(&trans_pcie->irq_lock);
                lock_map_release(&trans->sync_cmd_lockdep_map);
                return IRQ_NONE;
@@ -1699,15 +1728,17 @@ irqreturn_t iwl_pcie_irq_handler(int irq, void *dev_id)
                         inta & ~trans_pcie->inta_mask);
        }
 
+       spin_lock(&trans_pcie->irq_lock);
+       /* only re-enable all interrupts if they were disabled by irq */
+       if (test_bit(STATUS_INT_ENABLED, &trans->status))
+               _iwl_enable_interrupts(trans);
        /* we are loading the firmware, enable FH_TX interrupt only */
-       if (handled & CSR_INT_BIT_FH_TX)
+       else if (handled & CSR_INT_BIT_FH_TX)
                iwl_enable_fw_load_int(trans);
-       /* only Re-enable all interrupt if disabled by irq */
-       else if (test_bit(STATUS_INT_ENABLED, &trans->status))
-               iwl_enable_interrupts(trans);
        /* Re-enable RF_KILL if it occurred */
        else if (handled & CSR_INT_BIT_RF_KILL)
                iwl_enable_rfkill_int(trans);
+       spin_unlock(&trans_pcie->irq_lock);
 
 out:
        lock_map_release(&trans->sync_cmd_lockdep_map);
@@ -1771,7 +1802,7 @@ void iwl_pcie_reset_ict(struct iwl_trans *trans)
                return;
 
        spin_lock(&trans_pcie->irq_lock);
-       iwl_disable_interrupts(trans);
+       _iwl_disable_interrupts(trans);
 
        memset(trans_pcie->ict_tbl, 0, ICT_SIZE);
 
@@ -1787,7 +1818,7 @@ void iwl_pcie_reset_ict(struct iwl_trans *trans)
        trans_pcie->use_ict = true;
        trans_pcie->ict_index = 0;
        iwl_write32(trans, CSR_INT, trans_pcie->inta_mask);
-       iwl_enable_interrupts(trans);
+       _iwl_enable_interrupts(trans);
        spin_unlock(&trans_pcie->irq_lock);
 }
 
index f603d78..74f2f03 100644 (file)
@@ -608,18 +608,10 @@ static int iwl_pcie_prepare_card_hw(struct iwl_trans *trans)
 /*
  * ucode
  */
-static int iwl_pcie_load_firmware_chunk(struct iwl_trans *trans, u32 dst_addr,
-                                  dma_addr_t phy_addr, u32 byte_cnt)
+static void iwl_pcie_load_firmware_chunk_fh(struct iwl_trans *trans,
+                                           u32 dst_addr, dma_addr_t phy_addr,
+                                           u32 byte_cnt)
 {
-       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
-       unsigned long flags;
-       int ret;
-
-       trans_pcie->ucode_write_complete = false;
-
-       if (!iwl_trans_grab_nic_access(trans, &flags))
-               return -EIO;
-
        iwl_write32(trans, FH_TCSR_CHNL_TX_CONFIG_REG(FH_SRVC_CHNL),
                    FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_PAUSE);
 
@@ -642,7 +634,50 @@ static int iwl_pcie_load_firmware_chunk(struct iwl_trans *trans, u32 dst_addr,
                    FH_TCSR_TX_CONFIG_REG_VAL_DMA_CHNL_ENABLE |
                    FH_TCSR_TX_CONFIG_REG_VAL_DMA_CREDIT_DISABLE |
                    FH_TCSR_TX_CONFIG_REG_VAL_CIRQ_HOST_ENDTFD);
+}
+
+static void iwl_pcie_load_firmware_chunk_tfh(struct iwl_trans *trans,
+                                            u32 dst_addr, dma_addr_t phy_addr,
+                                            u32 byte_cnt)
+{
+       /* Stop DMA channel */
+       iwl_write32(trans, TFH_SRV_DMA_CHNL0_CTRL, 0);
+
+       /* Configure SRAM address */
+       iwl_write32(trans, TFH_SRV_DMA_CHNL0_SRAM_ADDR,
+                   dst_addr);
+
+       /* Configure DRAM address - 64 bit */
+       iwl_write64(trans, TFH_SRV_DMA_CHNL0_DRAM_ADDR, phy_addr);
+
+       /* Configure byte count to transfer */
+       iwl_write32(trans, TFH_SRV_DMA_CHNL0_BC, byte_cnt);
 
+       /* Enable the DRAM2SRAM to start */
+       iwl_write32(trans, TFH_SRV_DMA_CHNL0_CTRL, TFH_SRV_DMA_SNOOP |
+                                                  TFH_SRV_DMA_TO_DRIVER |
+                                                  TFH_SRV_DMA_START);
+}
+
+static int iwl_pcie_load_firmware_chunk(struct iwl_trans *trans,
+                                       u32 dst_addr, dma_addr_t phy_addr,
+                                       u32 byte_cnt)
+{
+       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+       unsigned long flags;
+       int ret;
+
+       trans_pcie->ucode_write_complete = false;
+
+       if (!iwl_trans_grab_nic_access(trans, &flags))
+               return -EIO;
+
+       if (trans->cfg->use_tfh)
+               iwl_pcie_load_firmware_chunk_tfh(trans, dst_addr, phy_addr,
+                                                byte_cnt);
+       else
+               iwl_pcie_load_firmware_chunk_fh(trans, dst_addr, phy_addr,
+                                               byte_cnt);
        iwl_trans_release_nic_access(trans, &flags);
 
        ret = wait_event_timeout(trans_pcie->ucode_write_waitq,
@@ -801,6 +836,8 @@ static int iwl_pcie_load_cpu_sections_8000(struct iwl_trans *trans,
 
        *first_ucode_section = last_read_idx;
 
+       iwl_enable_interrupts(trans);
+
        if (cpu == 1)
                iwl_write_direct32(trans, FH_UCODE_LOAD_STATUS, 0xFFFF);
        else
@@ -980,6 +1017,8 @@ static int iwl_pcie_load_given_ucode(struct iwl_trans *trans,
                iwl_pcie_apply_destination(trans);
        }
 
+       iwl_enable_interrupts(trans);
+
        /* release CPU reset */
        iwl_write32(trans, CSR_RESET, 0);
 
@@ -1033,9 +1072,7 @@ static void _iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
        was_hw_rfkill = iwl_is_rfkill_set(trans);
 
        /* tell the device to stop sending interrupts */
-       spin_lock(&trans_pcie->irq_lock);
        iwl_disable_interrupts(trans);
-       spin_unlock(&trans_pcie->irq_lock);
 
        /* device going down, Stop using ICT table */
        iwl_pcie_disable_ict(trans);
@@ -1079,9 +1116,7 @@ static void _iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
         * the time, unless the interrupt is ACKed even if the interrupt
         * should be masked. Re-ACK all the interrupts here.
         */
-       spin_lock(&trans_pcie->irq_lock);
        iwl_disable_interrupts(trans);
-       spin_unlock(&trans_pcie->irq_lock);
 
        /* clear all status bits */
        clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
@@ -1215,7 +1250,6 @@ static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
                ret = iwl_pcie_load_given_ucode_8000(trans, fw);
        else
                ret = iwl_pcie_load_given_ucode(trans, fw);
-       iwl_enable_interrupts(trans);
 
        /* re-check RF-Kill state since we may have missed the interrupt */
        hw_rfkill = iwl_is_rfkill_set(trans);
@@ -1286,6 +1320,8 @@ static void iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test,
        iwl_clear_bit(trans, CSR_GP_CNTRL,
                      CSR_GP_CNTRL_REG_FLAG_INIT_DONE);
 
+       iwl_pcie_enable_rx_wake(trans, false);
+
        if (reset) {
                /*
                 * reset TX queues -- some of their registers reset during S3
@@ -1311,6 +1347,8 @@ static int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
                return 0;
        }
 
+       iwl_pcie_enable_rx_wake(trans, true);
+
        /*
         * Also enables interrupts - none will happen as the device doesn't
         * know we're waking it up, only when the opmode actually tells it
@@ -1389,8 +1427,12 @@ static void iwl_pcie_init_msix(struct iwl_trans_pcie *trans_pcie)
 
        max_rx_vector = trans_pcie->allocated_vector - 1;
 
-       if (!trans_pcie->msix_enabled)
+       if (!trans_pcie->msix_enabled) {
+               if (trans->cfg->mq_rx_supported)
+                       iwl_write_prph(trans, UREG_CHICK,
+                                      UREG_CHICK_MSI_ENABLE);
                return;
+       }
 
        iwl_write_prph(trans, UREG_CHICK, UREG_CHICK_MSIX_ENABLE);
 
@@ -1567,15 +1609,11 @@ static void iwl_trans_pcie_op_mode_leave(struct iwl_trans *trans)
        mutex_lock(&trans_pcie->mutex);
 
        /* disable interrupts - don't enable HW RF kill interrupt */
-       spin_lock(&trans_pcie->irq_lock);
        iwl_disable_interrupts(trans);
-       spin_unlock(&trans_pcie->irq_lock);
 
        iwl_pcie_apm_stop(trans, true);
 
-       spin_lock(&trans_pcie->irq_lock);
        iwl_disable_interrupts(trans);
-       spin_unlock(&trans_pcie->irq_lock);
 
        iwl_pcie_disable_ict(trans);
 
@@ -1639,6 +1677,9 @@ static void iwl_trans_pcie_configure(struct iwl_trans *trans,
        trans_pcie->scd_set_active = trans_cfg->scd_set_active;
        trans_pcie->sw_csum_tx = trans_cfg->sw_csum_tx;
 
+       trans_pcie->page_offs = trans_cfg->cb_data_offs;
+       trans_pcie->dev_cmd_offs = trans_cfg->cb_data_offs + sizeof(void *);
+
        trans->command_groups = trans_cfg->command_groups;
        trans->command_groups_size = trans_cfg->command_groups_size;
 
@@ -1909,6 +1950,48 @@ static void iwl_trans_pcie_block_txq_ptrs(struct iwl_trans *trans, bool block)
 
 #define IWL_FLUSH_WAIT_MS      2000
 
+void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
+{
+       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+       u32 scd_sram_addr;
+       u8 buf[16];
+       int cnt;
+
+       IWL_ERR(trans, "Current SW read_ptr %d write_ptr %d\n",
+               txq->q.read_ptr, txq->q.write_ptr);
+
+       scd_sram_addr = trans_pcie->scd_base_addr +
+                       SCD_TX_STTS_QUEUE_OFFSET(txq->q.id);
+       iwl_trans_read_mem_bytes(trans, scd_sram_addr, buf, sizeof(buf));
+
+       iwl_print_hex_error(trans, buf, sizeof(buf));
+
+       for (cnt = 0; cnt < FH_TCSR_CHNL_NUM; cnt++)
+               IWL_ERR(trans, "FH TRBs(%d) = 0x%08x\n", cnt,
+                       iwl_read_direct32(trans, FH_TX_TRB_REG(cnt)));
+
+       for (cnt = 0; cnt < trans->cfg->base_params->num_of_queues; cnt++) {
+               u32 status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(cnt));
+               u8 fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
+               bool active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
+               u32 tbl_dw =
+                       iwl_trans_read_mem32(trans, trans_pcie->scd_base_addr +
+                                            SCD_TRANS_TBL_OFFSET_QUEUE(cnt));
+
+               if (cnt & 0x1)
+                       tbl_dw = (tbl_dw & 0xFFFF0000) >> 16;
+               else
+                       tbl_dw = tbl_dw & 0x0000FFFF;
+
+               IWL_ERR(trans,
+                       "Q %d is %sactive and mapped to fifo %d ra_tid 0x%04x [%d,%d]\n",
+                       cnt, active ? "" : "in", fifo, tbl_dw,
+                       iwl_read_prph(trans, SCD_QUEUE_RDPTR(cnt)) &
+                               (TFD_QUEUE_SIZE_MAX - 1),
+                       iwl_read_prph(trans, SCD_QUEUE_WRPTR(cnt)));
+       }
+}
+
 static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, u32 txq_bm)
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -1916,8 +1999,6 @@ static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, u32 txq_bm)
        struct iwl_queue *q;
        int cnt;
        unsigned long now = jiffies;
-       u32 scd_sram_addr;
-       u8 buf[16];
        int ret = 0;
 
        /* waiting for all the tx frames complete might take a while */
@@ -1957,42 +2038,8 @@ static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, u32 txq_bm)
                IWL_DEBUG_TX_QUEUES(trans, "Queue %d is now empty.\n", cnt);
        }
 
-       if (!ret)
-               return 0;
-
-       IWL_ERR(trans, "Current SW read_ptr %d write_ptr %d\n",
-               txq->q.read_ptr, txq->q.write_ptr);
-
-       scd_sram_addr = trans_pcie->scd_base_addr +
-                       SCD_TX_STTS_QUEUE_OFFSET(txq->q.id);
-       iwl_trans_read_mem_bytes(trans, scd_sram_addr, buf, sizeof(buf));
-
-       iwl_print_hex_error(trans, buf, sizeof(buf));
-
-       for (cnt = 0; cnt < FH_TCSR_CHNL_NUM; cnt++)
-               IWL_ERR(trans, "FH TRBs(%d) = 0x%08x\n", cnt,
-                       iwl_read_direct32(trans, FH_TX_TRB_REG(cnt)));
-
-       for (cnt = 0; cnt < trans->cfg->base_params->num_of_queues; cnt++) {
-               u32 status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(cnt));
-               u8 fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
-               bool active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
-               u32 tbl_dw =
-                       iwl_trans_read_mem32(trans, trans_pcie->scd_base_addr +
-                                            SCD_TRANS_TBL_OFFSET_QUEUE(cnt));
-
-               if (cnt & 0x1)
-                       tbl_dw = (tbl_dw & 0xFFFF0000) >> 16;
-               else
-                       tbl_dw = tbl_dw & 0x0000FFFF;
-
-               IWL_ERR(trans,
-                       "Q %d is %sactive and mapped to fifo %d ra_tid 0x%04x [%d,%d]\n",
-                       cnt, active ? "" : "in", fifo, tbl_dw,
-                       iwl_read_prph(trans, SCD_QUEUE_RDPTR(cnt)) &
-                               (TFD_QUEUE_SIZE_MAX - 1),
-                       iwl_read_prph(trans, SCD_QUEUE_WRPTR(cnt)));
-       }
+       if (ret)
+               iwl_trans_pcie_log_scd_error(trans, txq);
 
        return ret;
 }
@@ -2741,6 +2788,8 @@ static const struct iwl_trans_ops trans_ops_pcie = {
        .txq_disable = iwl_trans_pcie_txq_disable,
        .txq_enable = iwl_trans_pcie_txq_enable,
 
+       .txq_set_shared_mode = iwl_trans_pcie_txq_set_shared_mode,
+
        .wait_tx_queue_empty = iwl_trans_pcie_wait_txq_empty,
        .freeze_txq_timer = iwl_trans_pcie_freeze_txq_timer,
        .block_txq_ptrs = iwl_trans_pcie_block_txq_ptrs,
index d6beac9..18650dc 100644 (file)
@@ -70,6 +70,7 @@
  * Tx queue resumed.
  *
  ***************************************************/
+
 static int iwl_queue_space(const struct iwl_queue *q)
 {
        unsigned int max;
@@ -154,10 +155,6 @@ static void iwl_pcie_txq_stuck_timer(unsigned long data)
        struct iwl_txq *txq = (void *)data;
        struct iwl_trans_pcie *trans_pcie = txq->trans_pcie;
        struct iwl_trans *trans = iwl_trans_pcie_get_trans(trans_pcie);
-       u32 scd_sram_addr = trans_pcie->scd_base_addr +
-                               SCD_TX_STTS_QUEUE_OFFSET(txq->q.id);
-       u8 buf[16];
-       int i;
 
        spin_lock(&txq->lock);
        /* check if triggered erroneously */
@@ -169,38 +166,8 @@ static void iwl_pcie_txq_stuck_timer(unsigned long data)
 
        IWL_ERR(trans, "Queue %d stuck for %u ms.\n", txq->q.id,
                jiffies_to_msecs(txq->wd_timeout));
-       IWL_ERR(trans, "Current SW read_ptr %d write_ptr %d\n",
-               txq->q.read_ptr, txq->q.write_ptr);
-
-       iwl_trans_read_mem_bytes(trans, scd_sram_addr, buf, sizeof(buf));
-
-       iwl_print_hex_error(trans, buf, sizeof(buf));
-
-       for (i = 0; i < FH_TCSR_CHNL_NUM; i++)
-               IWL_ERR(trans, "FH TRBs(%d) = 0x%08x\n", i,
-                       iwl_read_direct32(trans, FH_TX_TRB_REG(i)));
-
-       for (i = 0; i < trans->cfg->base_params->num_of_queues; i++) {
-               u32 status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(i));
-               u8 fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
-               bool active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
-               u32 tbl_dw =
-                       iwl_trans_read_mem32(trans,
-                                            trans_pcie->scd_base_addr +
-                                            SCD_TRANS_TBL_OFFSET_QUEUE(i));
-
-               if (i & 0x1)
-                       tbl_dw = (tbl_dw & 0xFFFF0000) >> 16;
-               else
-                       tbl_dw = tbl_dw & 0x0000FFFF;
 
-               IWL_ERR(trans,
-                       "Q %d is %sactive and mapped to fifo %d ra_tid 0x%04x [%d,%d]\n",
-                       i, active ? "" : "in", fifo, tbl_dw,
-                       iwl_read_prph(trans, SCD_QUEUE_RDPTR(i)) &
-                               (TFD_QUEUE_SIZE_MAX - 1),
-                       iwl_read_prph(trans, SCD_QUEUE_WRPTR(i)));
-       }
+       iwl_trans_pcie_log_scd_error(trans, txq);
 
        iwl_force_nmi(trans);
 }
@@ -393,7 +360,7 @@ static void iwl_pcie_tfd_unmap(struct iwl_trans *trans,
                return;
        }
 
-       /* first TB is never freed - it's the scratchbuf data */
+       /* first TB is never freed - it's the bidirectional DMA data */
 
        for (i = 1; i < num_tbs; i++) {
                if (meta->flags & BIT(i + CMD_TB_BITMAP_POS))
@@ -491,7 +458,7 @@ static int iwl_pcie_txq_alloc(struct iwl_trans *trans,
 {
        struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
        size_t tfd_sz = sizeof(struct iwl_tfd) * TFD_QUEUE_SIZE_MAX;
-       size_t scratchbuf_sz;
+       size_t tb0_buf_sz;
        int i;
 
        if (WARN_ON(txq->entries || txq->tfds))
@@ -526,17 +493,14 @@ static int iwl_pcie_txq_alloc(struct iwl_trans *trans,
        if (!txq->tfds)
                goto error;
 
-       BUILD_BUG_ON(IWL_HCMD_SCRATCHBUF_SIZE != sizeof(*txq->scratchbufs));
-       BUILD_BUG_ON(offsetof(struct iwl_pcie_txq_scratch_buf, scratch) !=
-                       sizeof(struct iwl_cmd_header) +
-                       offsetof(struct iwl_tx_cmd, scratch));
+       BUILD_BUG_ON(IWL_FIRST_TB_SIZE_ALIGN != sizeof(*txq->first_tb_bufs));
 
-       scratchbuf_sz = sizeof(*txq->scratchbufs) * slots_num;
+       tb0_buf_sz = sizeof(*txq->first_tb_bufs) * slots_num;
 
-       txq->scratchbufs = dma_alloc_coherent(trans->dev, scratchbuf_sz,
-                                             &txq->scratchbufs_dma,
+       txq->first_tb_bufs = dma_alloc_coherent(trans->dev, tb0_buf_sz,
+                                             &txq->first_tb_dma,
                                              GFP_KERNEL);
-       if (!txq->scratchbufs)
+       if (!txq->first_tb_bufs)
                goto err_free_tfds;
 
        txq->q.id = txq_id;
@@ -578,22 +542,27 @@ static int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
         * Tell nic where to find circular buffer of Tx Frame Descriptors for
         * given Tx queue, and enable the DMA channel used for that queue.
         * Circular buffer (TFD queue in DRAM) physical base address */
-       iwl_write_direct32(trans, FH_MEM_CBBC_QUEUE(txq_id),
-                          txq->q.dma_addr >> 8);
+       if (trans->cfg->use_tfh)
+               iwl_write_direct64(trans,
+                                  FH_MEM_CBBC_QUEUE(trans, txq_id),
+                                  txq->q.dma_addr);
+       else
+               iwl_write_direct32(trans, FH_MEM_CBBC_QUEUE(trans, txq_id),
+                                  txq->q.dma_addr >> 8);
 
        return 0;
 }
 
-static void iwl_pcie_free_tso_page(struct sk_buff *skb)
+static void iwl_pcie_free_tso_page(struct iwl_trans_pcie *trans_pcie,
+                                  struct sk_buff *skb)
 {
-       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+       struct page **page_ptr;
 
-       if (info->driver_data[IWL_TRANS_FIRST_DRIVER_DATA]) {
-               struct page *page =
-                       info->driver_data[IWL_TRANS_FIRST_DRIVER_DATA];
+       page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs);
 
-               __free_page(page);
-               info->driver_data[IWL_TRANS_FIRST_DRIVER_DATA] = NULL;
+       if (*page_ptr) {
+               __free_page(*page_ptr);
+               *page_ptr = NULL;
        }
 }
 
@@ -639,7 +608,7 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
                        if (WARN_ON_ONCE(!skb))
                                continue;
 
-                       iwl_pcie_free_tso_page(skb);
+                       iwl_pcie_free_tso_page(trans_pcie, skb);
                }
                iwl_pcie_txq_free_tfd(trans, txq);
                q->read_ptr = iwl_queue_inc_wrap(q->read_ptr);
@@ -708,8 +677,8 @@ static void iwl_pcie_txq_free(struct iwl_trans *trans, int txq_id)
                txq->tfds = NULL;
 
                dma_free_coherent(dev,
-                                 sizeof(*txq->scratchbufs) * txq->q.n_window,
-                                 txq->scratchbufs, txq->scratchbufs_dma);
+                                 sizeof(*txq->first_tb_bufs) * txq->q.n_window,
+                                 txq->first_tb_bufs, txq->first_tb_dma);
        }
 
        kfree(txq->entries);
@@ -786,9 +755,14 @@ void iwl_trans_pcie_tx_reset(struct iwl_trans *trans)
        for (txq_id = 0; txq_id < trans->cfg->base_params->num_of_queues;
             txq_id++) {
                struct iwl_txq *txq = &trans_pcie->txq[txq_id];
-
-               iwl_write_direct32(trans, FH_MEM_CBBC_QUEUE(txq_id),
-                                  txq->q.dma_addr >> 8);
+               if (trans->cfg->use_tfh)
+                       iwl_write_direct64(trans,
+                                          FH_MEM_CBBC_QUEUE(trans, txq_id),
+                                          txq->q.dma_addr);
+               else
+                       iwl_write_direct32(trans,
+                                          FH_MEM_CBBC_QUEUE(trans, txq_id),
+                                          txq->q.dma_addr >> 8);
                iwl_pcie_txq_unmap(trans, txq_id);
                txq->q.read_ptr = 0;
                txq->q.write_ptr = 0;
@@ -996,6 +970,12 @@ int iwl_pcie_tx_init(struct iwl_trans *trans)
                }
        }
 
+       if (trans->cfg->use_tfh)
+               iwl_write_direct32(trans, TFH_TRANSFER_MODE,
+                                  TFH_TRANSFER_MAX_PENDING_REQ |
+                                  TFH_CHUNK_SIZE_128 |
+                                  TFH_CHUNK_SPLIT_MODE);
+
        iwl_set_bits_prph(trans, SCD_GP_CTRL, SCD_GP_CTRL_AUTO_ACTIVE_MODE);
        if (trans->cfg->base_params->num_of_queues > 20)
                iwl_set_bits_prph(trans, SCD_GP_CTRL,
@@ -1084,7 +1064,7 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
                if (WARN_ON_ONCE(!skb))
                        continue;
 
-               iwl_pcie_free_tso_page(skb);
+               iwl_pcie_free_tso_page(trans_pcie, skb);
 
                __skb_queue_tail(skbs, skb);
 
@@ -1115,17 +1095,17 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 
                while (!skb_queue_empty(&overflow_skbs)) {
                        struct sk_buff *skb = __skb_dequeue(&overflow_skbs);
-                       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-                       u8 dev_cmd_idx = IWL_TRANS_FIRST_DRIVER_DATA + 1;
-                       struct iwl_device_cmd *dev_cmd =
-                               info->driver_data[dev_cmd_idx];
+                       struct iwl_device_cmd *dev_cmd_ptr;
+
+                       dev_cmd_ptr = *(void **)((u8 *)skb->cb +
+                                                trans_pcie->dev_cmd_offs);
 
                        /*
                         * Note that we can very well be overflowing again.
                         * In that case, iwl_queue_space will be small again
                         * and we won't wake mac80211's queue.
                         */
-                       iwl_trans_pcie_tx(trans, skb, dev_cmd, txq_id);
+                       iwl_trans_pcie_tx(trans, skb, dev_cmd_ptr, txq_id);
                }
                spin_lock_bh(&txq->lock);
 
@@ -1354,6 +1334,15 @@ void iwl_trans_pcie_txq_enable(struct iwl_trans *trans, int txq_id, u16 ssn,
        txq->active = true;
 }
 
+void iwl_trans_pcie_txq_set_shared_mode(struct iwl_trans *trans, u32 txq_id,
+                                       bool shared_mode)
+{
+       struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+       struct iwl_txq *txq = &trans_pcie->txq[txq_id];
+
+       txq->ampdu = !shared_mode;
+}
+
 void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int txq_id,
                                bool configure_scd)
 {
@@ -1413,7 +1402,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
        void *dup_buf = NULL;
        dma_addr_t phys_addr;
        int idx;
-       u16 copy_size, cmd_size, scratch_size;
+       u16 copy_size, cmd_size, tb0_size;
        bool had_nocopy = false;
        u8 group_id = iwl_cmd_groupid(cmd->id);
        int i, ret;
@@ -1444,9 +1433,9 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
                if (!cmd->len[i])
                        continue;
 
-               /* need at least IWL_HCMD_SCRATCHBUF_SIZE copied */
-               if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) {
-                       int copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size;
+               /* need at least IWL_FIRST_TB_SIZE copied */
+               if (copy_size < IWL_FIRST_TB_SIZE) {
+                       int copy = IWL_FIRST_TB_SIZE - copy_size;
 
                        if (copy > cmdlen[i])
                                copy = cmdlen[i];
@@ -1567,8 +1556,8 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
                }
 
                /*
-                * Otherwise we need at least IWL_HCMD_SCRATCHBUF_SIZE copied
-                * in total (for the scratchbuf handling), but copy up to what
+                * Otherwise we need at least IWL_FIRST_TB_SIZE copied
+                * in total (for bi-directional DMA), but copy up to what
                 * we can fit into the payload for debug dump purposes.
                 */
                copy = min_t(int, TFD_MAX_PAYLOAD_SIZE - cmd_pos, cmd->len[i]);
@@ -1577,8 +1566,8 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
                cmd_pos += copy;
 
                /* However, treat copy_size the proper way, we need it below */
-               if (copy_size < IWL_HCMD_SCRATCHBUF_SIZE) {
-                       copy = IWL_HCMD_SCRATCHBUF_SIZE - copy_size;
+               if (copy_size < IWL_FIRST_TB_SIZE) {
+                       copy = IWL_FIRST_TB_SIZE - copy_size;
 
                        if (copy > cmd->len[i])
                                copy = cmd->len[i];
@@ -1593,18 +1582,18 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
                     le16_to_cpu(out_cmd->hdr.sequence),
                     cmd_size, q->write_ptr, idx, trans_pcie->cmd_queue);
 
-       /* start the TFD with the scratchbuf */
-       scratch_size = min_t(int, copy_size, IWL_HCMD_SCRATCHBUF_SIZE);
-       memcpy(&txq->scratchbufs[q->write_ptr], &out_cmd->hdr, scratch_size);
+       /* start the TFD with the minimum copy bytes */
+       tb0_size = min_t(int, copy_size, IWL_FIRST_TB_SIZE);
+       memcpy(&txq->first_tb_bufs[idx], &out_cmd->hdr, tb0_size);
        iwl_pcie_txq_build_tfd(trans, txq,
-                              iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr),
-                              scratch_size, true);
+                              iwl_pcie_get_first_tb_dma(txq, idx),
+                              tb0_size, true);
 
        /* map first command fragment, if any remains */
-       if (copy_size > scratch_size) {
+       if (copy_size > tb0_size) {
                phys_addr = dma_map_single(trans->dev,
-                                          ((u8 *)&out_cmd->hdr) + scratch_size,
-                                          copy_size - scratch_size,
+                                          ((u8 *)&out_cmd->hdr) + tb0_size,
+                                          copy_size - tb0_size,
                                           DMA_TO_DEVICE);
                if (dma_mapping_error(trans->dev, phys_addr)) {
                        iwl_pcie_tfd_unmap(trans, out_meta,
@@ -1614,7 +1603,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
                }
 
                iwl_pcie_txq_build_tfd(trans, txq, phys_addr,
-                                      copy_size - scratch_size, false);
+                                      copy_size - tb0_size, false);
        }
 
        /* map the remaining (adjusted) nocopy/dup fragments */
@@ -1959,7 +1948,7 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb,
        trace_iwlwifi_dev_tx(trans->dev, skb,
                             &txq->tfds[txq->q.write_ptr],
                             sizeof(struct iwl_tfd),
-                            &dev_cmd->hdr, IWL_HCMD_SCRATCHBUF_SIZE + tb1_len,
+                            &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len,
                             skb->data + hdr_len, tb2_len);
        trace_iwlwifi_dev_tx_data(trans->dev, skb,
                                  hdr_len, skb->len - hdr_len);
@@ -2015,7 +2004,6 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
                                   struct iwl_cmd_meta *out_meta,
                                   struct iwl_device_cmd *dev_cmd, u16 tb1_len)
 {
-       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
        struct iwl_trans_pcie *trans_pcie = txq->trans_pcie;
        struct ieee80211_hdr *hdr = (void *)skb->data;
        unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
@@ -2024,6 +2012,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
        u16 length, iv_len, amsdu_pad;
        u8 *start_hdr;
        struct iwl_tso_hdr_page *hdr_page;
+       struct page **page_ptr;
        int ret;
        struct tso_t tso;
 
@@ -2035,7 +2024,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
        trace_iwlwifi_dev_tx(trans->dev, skb,
                             &txq->tfds[txq->q.write_ptr],
                             sizeof(struct iwl_tfd),
-                            &dev_cmd->hdr, IWL_HCMD_SCRATCHBUF_SIZE + tb1_len,
+                            &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len,
                             NULL, 0);
 
        ip_hdrlen = skb_transport_header(skb) - skb_network_header(skb);
@@ -2054,7 +2043,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
 
        get_page(hdr_page->page);
        start_hdr = hdr_page->pos;
-       info->driver_data[IWL_TRANS_FIRST_DRIVER_DATA] = hdr_page->page;
+       page_ptr = (void *)((u8 *)skb->cb + trans_pcie->page_offs);
+       *page_ptr = hdr_page->page;
        memcpy(hdr_page->pos, skb->data + hdr_len, iv_len);
        hdr_page->pos += iv_len;
 
@@ -2264,10 +2254,12 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 
                /* don't put the packet on the ring, if there is no room */
                if (unlikely(iwl_queue_space(q) < 3)) {
-                       struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+                       struct iwl_device_cmd **dev_cmd_ptr;
+
+                       dev_cmd_ptr = (void *)((u8 *)skb->cb +
+                                              trans_pcie->dev_cmd_offs);
 
-                       info->driver_data[IWL_TRANS_FIRST_DRIVER_DATA + 1] =
-                               dev_cmd;
+                       *dev_cmd_ptr = dev_cmd;
                        __skb_queue_tail(&txq->overflow_q, skb);
 
                        spin_unlock(&txq->lock);
@@ -2294,7 +2286,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
                cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
                            INDEX_TO_SEQ(q->write_ptr)));
 
-       tb0_phys = iwl_pcie_get_scratchbuf_dma(txq, q->write_ptr);
+       tb0_phys = iwl_pcie_get_first_tb_dma(txq, q->write_ptr);
        scratch_phys = tb0_phys + sizeof(struct iwl_cmd_header) +
                       offsetof(struct iwl_tx_cmd, scratch);
 
@@ -2312,7 +2304,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
         * setup of the first TB)
         */
        len = sizeof(struct iwl_tx_cmd) + sizeof(struct iwl_cmd_header) +
-             hdr_len - IWL_HCMD_SCRATCHBUF_SIZE;
+             hdr_len - IWL_FIRST_TB_SIZE;
        /* do not align A-MSDU to dword as the subframe header aligns it */
        amsdu = ieee80211_is_data_qos(fc) &&
                (*ieee80211_get_qos_ctl(hdr) &
@@ -2326,17 +2318,17 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
                tb1_len = len;
        }
 
-       /* The first TB points to the scratchbuf data - min_copy bytes */
-       memcpy(&txq->scratchbufs[q->write_ptr], &dev_cmd->hdr,
-              IWL_HCMD_SCRATCHBUF_SIZE);
+       /* The first TB points to bi-directional DMA data */
+       memcpy(&txq->first_tb_bufs[q->write_ptr], &dev_cmd->hdr,
+              IWL_FIRST_TB_SIZE);
        iwl_pcie_txq_build_tfd(trans, txq, tb0_phys,
-                              IWL_HCMD_SCRATCHBUF_SIZE, true);
+                              IWL_FIRST_TB_SIZE, true);
 
        /* there must be data left over for TB1 or this code must be changed */
-       BUILD_BUG_ON(sizeof(struct iwl_tx_cmd) < IWL_HCMD_SCRATCHBUF_SIZE);
+       BUILD_BUG_ON(sizeof(struct iwl_tx_cmd) < IWL_FIRST_TB_SIZE);
 
        /* map the data for TB1 */
-       tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_HCMD_SCRATCHBUF_SIZE;
+       tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE;
        tb1_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE);
        if (unlikely(dma_mapping_error(trans->dev, tb1_phys)))
                goto out_err;
index c95bf6d..c753e36 100644 (file)
@@ -27,6 +27,8 @@
 void lbs_mac_event_disconnected(struct lbs_private *priv,
                                bool locally_generated)
 {
+       unsigned long flags;
+
        if (priv->connect_status != LBS_CONNECTED)
                return;
 
@@ -46,9 +48,11 @@ void lbs_mac_event_disconnected(struct lbs_private *priv,
        netif_carrier_off(priv->dev);
 
        /* Free Tx and Rx packets */
+       spin_lock_irqsave(&priv->driver_lock, flags);
        kfree_skb(priv->currenttxskb);
        priv->currenttxskb = NULL;
        priv->tx_pending_len = 0;
+       spin_unlock_irqrestore(&priv->driver_lock, flags);
 
        priv->connect_status = LBS_DISCONNECTED;
 
index 75bf0c8..54e426c 100644 (file)
@@ -735,7 +735,7 @@ EXPORT_SYMBOL_GPL(lbtf_bcn_sent);
 static int __init lbtf_init_module(void)
 {
        lbtf_deb_enter(LBTF_DEB_MAIN);
-       lbtf_wq = create_workqueue("libertastf");
+       lbtf_wq = alloc_workqueue("libertastf", WQ_MEM_RECLAIM, 0);
        if (lbtf_wq == NULL) {
                printk(KERN_ERR "libertastf: couldn't create workqueue\n");
                return -ENOMEM;
index ff948a9..867ab81 100644 (file)
@@ -376,6 +376,29 @@ mwifiex_cfg80211_set_tx_power(struct wiphy *wiphy,
        return mwifiex_set_tx_power(priv, &power_cfg);
 }
 
+/*
+ * CFG802.11 operation handler to get Tx power.
+ */
+static int
+mwifiex_cfg80211_get_tx_power(struct wiphy *wiphy,
+                             struct wireless_dev *wdev,
+                             int *dbm)
+{
+       struct mwifiex_adapter *adapter = mwifiex_cfg80211_get_adapter(wiphy);
+       struct mwifiex_private *priv = mwifiex_get_priv(adapter,
+                                                       MWIFIEX_BSS_ROLE_ANY);
+       int ret = mwifiex_send_cmd(priv, HostCmd_CMD_RF_TX_PWR,
+                                  HostCmd_ACT_GEN_GET, 0, NULL, true);
+
+       if (ret < 0)
+               return ret;
+
+       /* tx_power_level is set in HostCmd_CMD_RF_TX_PWR command handler */
+       *dbm = priv->tx_power_level;
+
+       return 0;
+}
+
 /*
  * CFG802.11 operation handler to set Power Save option.
  *
@@ -1672,6 +1695,9 @@ static int mwifiex_cfg80211_change_beacon(struct wiphy *wiphy,
                                          struct cfg80211_beacon_data *data)
 {
        struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+       struct mwifiex_adapter *adapter = priv->adapter;
+
+       mwifiex_cancel_scan(adapter);
 
        if (GET_BSS_ROLE(priv) != MWIFIEX_BSS_ROLE_UAP) {
                mwifiex_dbg(priv->adapter, ERROR,
@@ -1804,6 +1830,21 @@ mwifiex_cfg80211_set_antenna(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant)
                                HostCmd_ACT_GEN_SET, 0, &ant_cfg, true);
 }
 
+static int
+mwifiex_cfg80211_get_antenna(struct wiphy *wiphy, u32 *tx_ant, u32 *rx_ant)
+{
+       struct mwifiex_adapter *adapter = mwifiex_cfg80211_get_adapter(wiphy);
+       struct mwifiex_private *priv = mwifiex_get_priv(adapter,
+                                                       MWIFIEX_BSS_ROLE_ANY);
+       mwifiex_send_cmd(priv, HostCmd_CMD_RF_ANTENNA,
+                        HostCmd_ACT_GEN_GET, 0, NULL, true);
+
+       *tx_ant = priv->tx_ant;
+       *rx_ant = priv->rx_ant;
+
+       return 0;
+}
+
 /* cfg80211 operation handler for stop ap.
  * Function stops BSS running at uAP interface.
  */
@@ -2209,6 +2250,9 @@ mwifiex_cfg80211_connect(struct wiphy *wiphy, struct net_device *dev,
                return -EALREADY;
        }
 
+       if (priv->scan_block)
+               priv->scan_block = false;
+
        if (adapter->surprise_removed || adapter->is_cmd_timedout) {
                mwifiex_dbg(adapter, ERROR,
                            "%s: Ignore connection.\t"
@@ -2427,6 +2471,9 @@ mwifiex_cfg80211_scan(struct wiphy *wiphy,
                return -EBUSY;
        }
 
+       if (!priv->wdev.current_bss && priv->scan_block)
+               priv->scan_block = false;
+
        if (!mwifiex_stop_bg_scan(priv))
                cfg80211_sched_scan_stopped_rtnl(priv->wdev.wiphy);
 
@@ -2734,6 +2781,7 @@ struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy,
        struct mwifiex_private *priv;
        struct net_device *dev;
        void *mdev_priv;
+       int ret;
 
        if (!adapter)
                return ERR_PTR(-EFAULT);
@@ -2859,6 +2907,15 @@ struct wireless_dev *mwifiex_add_virtual_intf(struct wiphy *wiphy,
        mwifiex_init_priv_params(priv, dev);
        priv->netdev = dev;
 
+       ret = mwifiex_send_cmd(priv, HostCmd_CMD_SET_BSS_MODE,
+                              HostCmd_ACT_GEN_SET, 0, NULL, true);
+       if (ret)
+               return ERR_PTR(ret);
+
+       ret = mwifiex_sta_init_cmd(priv, false, false);
+       if (ret)
+               return ERR_PTR(ret);
+
        mwifiex_setup_ht_caps(&wiphy->bands[NL80211_BAND_2GHZ]->ht_cap, priv);
        if (adapter->is_hw_11ac_capable)
                mwifiex_setup_vht_caps(
@@ -3262,7 +3319,10 @@ static int mwifiex_cfg80211_suspend(struct wiphy *wiphy,
        struct mwifiex_ds_hs_cfg hs_cfg;
        int i, ret = 0, retry_num = 10;
        struct mwifiex_private *priv;
+       struct mwifiex_private *sta_priv =
+                       mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_STA);
 
+       sta_priv->scan_aborting = true;
        for (i = 0; i < adapter->priv_num; i++) {
                priv = adapter->priv[i];
                mwifiex_abort_cac(priv);
@@ -3291,21 +3351,21 @@ static int mwifiex_cfg80211_suspend(struct wiphy *wiphy,
        if (!wowlan) {
                mwifiex_dbg(adapter, ERROR,
                            "None of the WOWLAN triggers enabled\n");
-               return 0;
+               ret = 0;
+               goto done;
        }
 
-       priv = mwifiex_get_priv(adapter, MWIFIEX_BSS_ROLE_STA);
-
-       if (!priv->media_connected && !wowlan->nd_config) {
+       if (!sta_priv->media_connected && !wowlan->nd_config) {
                mwifiex_dbg(adapter, ERROR,
                            "Can not configure WOWLAN in disconnected state\n");
-               return 0;
+               ret = 0;
+               goto done;
        }
 
-       ret = mwifiex_set_mef_filter(priv, wowlan);
+       ret = mwifiex_set_mef_filter(sta_priv, wowlan);
        if (ret) {
                mwifiex_dbg(adapter, ERROR, "Failed to set MEF filter\n");
-               return ret;
+               goto done;
        }
 
        memset(&hs_cfg, 0, sizeof(hs_cfg));
@@ -3314,26 +3374,25 @@ static int mwifiex_cfg80211_suspend(struct wiphy *wiphy,
        if (wowlan->nd_config) {
                mwifiex_dbg(adapter, INFO, "Wake on net detect\n");
                hs_cfg.conditions |= HS_CFG_COND_MAC_EVENT;
-               mwifiex_cfg80211_sched_scan_start(wiphy, priv->netdev,
+               mwifiex_cfg80211_sched_scan_start(wiphy, sta_priv->netdev,
                                                  wowlan->nd_config);
        }
 
        if (wowlan->disconnect) {
                hs_cfg.conditions |= HS_CFG_COND_MAC_EVENT;
-               mwifiex_dbg(priv->adapter, INFO, "Wake on device disconnect\n");
+               mwifiex_dbg(sta_priv->adapter, INFO, "Wake on device disconnect\n");
        }
 
        hs_cfg.is_invoke_hostcmd = false;
        hs_cfg.gpio = adapter->hs_cfg.gpio;
        hs_cfg.gap = adapter->hs_cfg.gap;
-       ret = mwifiex_set_hs_params(priv, HostCmd_ACT_GEN_SET,
+       ret = mwifiex_set_hs_params(sta_priv, HostCmd_ACT_GEN_SET,
                                    MWIFIEX_SYNC_CMD, &hs_cfg);
-       if (ret) {
-               mwifiex_dbg(adapter, ERROR,
-                           "Failed to set HS params\n");
-               return ret;
-       }
+       if (ret)
+               mwifiex_dbg(adapter, ERROR, "Failed to set HS params\n");
 
+done:
+       sta_priv->scan_aborting = false;
        return ret;
 }
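The suspend hunk above converts every early `return` into `goto done` so that `sta_priv->scan_aborting`, set on entry, is cleared on every exit path. A compact sketch of that single-exit pattern (struct and function names here are illustrative, not the driver's real types):

```c
#include <assert.h>
#include <stdbool.h>

struct fake_priv { bool scan_aborting; };

static int suspend_sketch(struct fake_priv *priv, bool have_wowlan)
{
        int ret = 0;

        priv->scan_aborting = true;     /* set on entry, as in the hunk */

        if (!have_wowlan) {
                ret = 0;
                goto done;              /* early exit still runs cleanup */
        }

        /* ... MEF filter / HS params configuration would go here ... */

done:
        priv->scan_aborting = false;    /* cleared on every path */
        return ret;
}
```

The payoff is that adding a new early-exit condition later cannot leak the flag.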
 
@@ -3940,12 +3999,14 @@ static struct cfg80211_ops mwifiex_cfg80211_ops = {
        .set_default_key = mwifiex_cfg80211_set_default_key,
        .set_power_mgmt = mwifiex_cfg80211_set_power_mgmt,
        .set_tx_power = mwifiex_cfg80211_set_tx_power,
+       .get_tx_power = mwifiex_cfg80211_get_tx_power,
        .set_bitrate_mask = mwifiex_cfg80211_set_bitrate_mask,
        .start_ap = mwifiex_cfg80211_start_ap,
        .stop_ap = mwifiex_cfg80211_stop_ap,
        .change_beacon = mwifiex_cfg80211_change_beacon,
        .set_cqm_rssi_config = mwifiex_cfg80211_set_cqm_rssi_config,
        .set_antenna = mwifiex_cfg80211_set_antenna,
+       .get_antenna = mwifiex_cfg80211_get_antenna,
        .del_station = mwifiex_cfg80211_del_station,
        .sched_scan_start = mwifiex_cfg80211_sched_scan_start,
        .sched_scan_stop = mwifiex_cfg80211_sched_scan_stop,
index e7a2144..c29f26d 100644
@@ -1020,8 +1020,6 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
 {
        struct cmd_ctrl_node *cmd_node = NULL, *tmp_node;
        unsigned long flags, cmd_flags;
-       struct mwifiex_private *priv;
-       int i;
 
        spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
        /* Cancel current cmd */
@@ -1046,27 +1044,7 @@ mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter)
        spin_unlock_irqrestore(&adapter->cmd_pending_q_lock, flags);
        spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
 
-       mwifiex_cancel_pending_scan_cmd(adapter);
-
-       if (adapter->scan_processing) {
-               spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
-               adapter->scan_processing = false;
-               spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
-               for (i = 0; i < adapter->priv_num; i++) {
-                       priv = adapter->priv[i];
-                       if (!priv)
-                               continue;
-                       if (priv->scan_request) {
-                               struct cfg80211_scan_info info = {
-                                       .aborted = true,
-                               };
-
-                               mwifiex_dbg(adapter, WARN, "info: aborting scan\n");
-                               cfg80211_scan_done(priv->scan_request, &info);
-                               priv->scan_request = NULL;
-                       }
-               }
-       }
+       mwifiex_cancel_scan(adapter);
 }
 
 /*
@@ -1084,8 +1062,6 @@ mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
 {
        struct cmd_ctrl_node *cmd_node = NULL;
        unsigned long cmd_flags;
-       struct mwifiex_private *priv;
-       int i;
 
        if ((adapter->curr_cmd) &&
            (adapter->curr_cmd->wait_q_enabled)) {
@@ -1105,27 +1081,7 @@ mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter)
                mwifiex_recycle_cmd_node(adapter, cmd_node);
        }
 
-       mwifiex_cancel_pending_scan_cmd(adapter);
-
-       if (adapter->scan_processing) {
-               spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
-               adapter->scan_processing = false;
-               spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
-               for (i = 0; i < adapter->priv_num; i++) {
-                       priv = adapter->priv[i];
-                       if (!priv)
-                               continue;
-                       if (priv->scan_request) {
-                               struct cfg80211_scan_info info = {
-                                       .aborted = true,
-                               };
-
-                               mwifiex_dbg(adapter, WARN, "info: aborting scan\n");
-                               cfg80211_scan_done(priv->scan_request, &info);
-                               priv->scan_request = NULL;
-                       }
-               }
-       }
+       mwifiex_cancel_scan(adapter);
 }
 
 /*
index 8e4145a..5596b6b 100644
@@ -462,6 +462,9 @@ enum P2P_MODES {
 #define HostCmd_ACT_SET_RX              0x0001
 #define HostCmd_ACT_SET_TX              0x0002
 #define HostCmd_ACT_SET_BOTH            0x0003
+#define HostCmd_ACT_GET_RX              0x0004
+#define HostCmd_ACT_GET_TX              0x0008
+#define HostCmd_ACT_GET_BOTH            0x000c
 
 #define RF_ANTENNA_AUTO                 0xFFFF
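The new GET action codes above compose by OR, exactly like the existing SET codes: BOTH is the union of the RX and TX bits. A quick sanity check of that relationship:

```c
#include <assert.h>

#define HostCmd_ACT_SET_RX   0x0001
#define HostCmd_ACT_SET_TX   0x0002
#define HostCmd_ACT_SET_BOTH 0x0003
#define HostCmd_ACT_GET_RX   0x0004
#define HostCmd_ACT_GET_TX   0x0008
#define HostCmd_ACT_GET_BOTH 0x000c
```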
 
@@ -1958,8 +1961,8 @@ struct mwifiex_ie_types_btcoex_scan_time {
        struct mwifiex_ie_types_header header;
        u8 coex_scan;
        u8 reserved;
-       u16 min_scan_time;
-       u16 max_scan_time;
+       __le16 min_scan_time;
+       __le16 max_scan_time;
 } __packed;
 
 struct mwifiex_ie_types_btcoex_aggr_win_size {
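The btcoex scan-time hunk above changes the field types from `u16` to `__le16` because the firmware sends these values little-endian on the wire; annotating them forces callers through le16_to_cpu() (as the event handler later in this series does), which matters on big-endian hosts. A portable userspace sketch of what le16_to_cpu amounts to:

```c
#include <assert.h>
#include <stdint.h>

/* assemble a little-endian 16-bit wire value, independent of host
 * endianness -- the userspace analogue of le16_to_cpu() */
static uint16_t sketch_le16_to_cpu(const uint8_t *wire)
{
        return (uint16_t)(wire[0] | ((uint16_t)wire[1] << 8));
}
```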
index a6d86d4..1489c90 100644
@@ -60,7 +60,7 @@ static void wakeup_timer_fn(unsigned long data)
        adapter->hw_status = MWIFIEX_HW_STATUS_RESET;
        mwifiex_cancel_all_pending_cmd(adapter);
 
-       if (adapter->if_ops.card_reset)
+       if (adapter->if_ops.card_reset && !adapter->hs_activated)
                adapter->if_ops.card_reset(adapter);
 }
 
@@ -110,6 +110,8 @@ int mwifiex_init_priv(struct mwifiex_private *priv)
        priv->tx_power_level = 0;
        priv->max_tx_power_level = 0;
        priv->min_tx_power_level = 0;
+       priv->tx_ant = 0;
+       priv->rx_ant = 0;
        priv->tx_rate = 0;
        priv->rxpd_htinfo = 0;
        priv->rxpd_rate = 0;
index a5a48c1..f5b8fd1 100644
@@ -83,6 +83,8 @@ struct wep_key {
 #define MWIFIEX_AUTH_MODE_AUTO  0xFF
 #define BAND_CONFIG_BG          0x00
 #define BAND_CONFIG_A           0x01
+#define MWIFIEX_SEC_CHAN_BELOW 0x30
+#define MWIFIEX_SEC_CHAN_ABOVE 0x10
 #define MWIFIEX_SUPPORTED_RATES                 14
 #define MWIFIEX_SUPPORTED_RATES_EXT             32
 #define MWIFIEX_TDLS_SUPPORTED_RATES           8
index a4b773d..1c7b006 100644
@@ -647,6 +647,12 @@ int mwifiex_ret_802_11_associate(struct mwifiex_private *priv,
        const u8 *ie_ptr;
        struct ieee80211_ht_operation *assoc_resp_ht_oper;
 
+       if (!priv->attempted_bss_desc) {
+               mwifiex_dbg(priv->adapter, ERROR,
+                           "ASSOC_RESP: failed, association terminated by host\n");
+               goto done;
+       }
+
        assoc_rsp = (struct ieee_types_assoc_rsp *) &resp->params;
 
        cap_info = le16_to_cpu(assoc_rsp->cap_info_bitmap);
@@ -1270,6 +1276,12 @@ int mwifiex_ret_802_11_ad_hoc(struct mwifiex_private *priv,
        u16 cmd = le16_to_cpu(resp->command);
        u8 result;
 
+       if (!priv->attempted_bss_desc) {
+               mwifiex_dbg(priv->adapter, ERROR,
+                           "ADHOC_RESP: failed, association terminated by host\n");
+               goto done;
+       }
+
        if (cmd == HostCmd_CMD_802_11_AD_HOC_START)
                result = start_result->result;
        else
index f0cd055..9f6bb40 100644
@@ -533,6 +533,8 @@ struct mwifiex_private {
        u16 tx_power_level;
        u8 max_tx_power_level;
        u8 min_tx_power_level;
+       u32 tx_ant;
+       u32 rx_ant;
        u8 tx_rate;
        u8 tx_htinfo;
        u8 rxpd_htinfo;
@@ -1054,6 +1056,7 @@ int mwifiex_free_cmd_buffer(struct mwifiex_adapter *adapter);
 void mwifiex_cancel_all_pending_cmd(struct mwifiex_adapter *adapter);
 void mwifiex_cancel_pending_ioctl(struct mwifiex_adapter *adapter);
 void mwifiex_cancel_pending_scan_cmd(struct mwifiex_adapter *adapter);
+void mwifiex_cancel_scan(struct mwifiex_adapter *adapter);
 
 void mwifiex_recycle_cmd_node(struct mwifiex_adapter *adapter,
                              struct cmd_ctrl_node *cmd_node);
index 1b1e266..22fe993 100644
@@ -440,6 +440,11 @@ static int mwifiex_pcie_disable_host_int(struct mwifiex_adapter *adapter)
        return 0;
 }
 
+static void mwifiex_pcie_disable_host_int_noerr(struct mwifiex_adapter *adapter)
+{
+       WARN_ON(mwifiex_pcie_disable_host_int(adapter));
+}
+
 /*
  * This function enables the host interrupt.
  *
@@ -2086,6 +2091,13 @@ static void mwifiex_interrupt_status(struct mwifiex_adapter *adapter,
        unsigned long flags;
        struct pcie_service_card *card = adapter->card;
 
+       if (card->msi_enable) {
+               spin_lock_irqsave(&adapter->int_lock, flags);
+               adapter->int_status = 1;
+               spin_unlock_irqrestore(&adapter->int_lock, flags);
+               return;
+       }
+
        if (!mwifiex_pcie_ok_to_access_hw(adapter))
                return;
 
@@ -2187,15 +2199,44 @@ exit:
 static int mwifiex_process_pcie_int(struct mwifiex_adapter *adapter)
 {
        int ret;
-       u32 pcie_ireg;
+       u32 pcie_ireg = 0;
        unsigned long flags;
+       struct pcie_service_card *card = adapter->card;
 
        spin_lock_irqsave(&adapter->int_lock, flags);
-       /* Clear out unused interrupts */
-       pcie_ireg = adapter->int_status;
+       if (!card->msi_enable) {
+               /* Clear out unused interrupts */
+               pcie_ireg = adapter->int_status;
+       }
        adapter->int_status = 0;
        spin_unlock_irqrestore(&adapter->int_lock, flags);
 
+       if (card->msi_enable) {
+               if (mwifiex_pcie_ok_to_access_hw(adapter)) {
+                       if (mwifiex_read_reg(adapter, PCIE_HOST_INT_STATUS,
+                                            &pcie_ireg)) {
+                               mwifiex_dbg(adapter, ERROR,
+                                           "Read register failed\n");
+                               return -1;
+                       }
+
+                       if ((pcie_ireg != 0xFFFFFFFF) && (pcie_ireg)) {
+                               if (mwifiex_write_reg(adapter,
+                                                     PCIE_HOST_INT_STATUS,
+                                                     ~pcie_ireg)) {
+                                       mwifiex_dbg(adapter, ERROR,
+                                                   "Write register failed\n");
+                                       return -1;
+                               }
+                               if (!adapter->pps_uapsd_mode &&
+                                   adapter->ps_state == PS_STATE_SLEEP) {
+                                       adapter->ps_state = PS_STATE_AWAKE;
+                                       adapter->pm_wakeup_fw_try = false;
+                                       del_timer(&adapter->wakeup_timer);
+                               }
+                       }
+               }
+       }
        while (pcie_ireg & HOST_INTR_MASK) {
                if (pcie_ireg & HOST_INTR_DNLD_DONE) {
                        pcie_ireg &= ~HOST_INTR_DNLD_DONE;
@@ -2235,6 +2276,12 @@ static int mwifiex_process_pcie_int(struct mwifiex_adapter *adapter)
                                return ret;
                }
 
+               if (card->msi_enable) {
+                       spin_lock_irqsave(&adapter->int_lock, flags);
+                       adapter->int_status = 0;
+                       spin_unlock_irqrestore(&adapter->int_lock, flags);
+               }
+
                if (mwifiex_pcie_ok_to_access_hw(adapter)) {
                        if (mwifiex_read_reg(adapter, PCIE_HOST_INT_STATUS,
                                             &pcie_ireg)) {
@@ -2258,7 +2305,7 @@ static int mwifiex_process_pcie_int(struct mwifiex_adapter *adapter)
        mwifiex_dbg(adapter, INTR,
                    "info: cmd_sent=%d data_sent=%d\n",
                    adapter->cmd_sent, adapter->data_sent);
-       if (adapter->ps_state != PS_STATE_SLEEP)
+       if (!card->msi_enable && adapter->ps_state != PS_STATE_SLEEP)
                mwifiex_pcie_enable_host_int(adapter);
 
        return 0;
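In the MSI branch above, the handler reads PCIE_HOST_INT_STATUS directly (instead of relying on the cached `int_status`) and acknowledges by writing the complement of what it saw. A minimal model of that read-then-ack-only-what-you-saw idea, assuming the register clears the bits written as zero (the register name and clearing semantics here are illustrative, not a statement about the real hardware):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t fake_int_status;        /* stands in for PCIE_HOST_INT_STATUS */

static uint32_t read_and_ack(void)
{
        uint32_t ireg = fake_int_status;

        /* 0xFFFFFFFF means the device is gone; ack nothing in that case */
        if (ireg && ireg != 0xFFFFFFFF)
                fake_int_status &= ~ireg;       /* clear only bits we saw */
        return ireg;
}
```

Acknowledging only the observed bits means an interrupt that fires between the read and the ack is not lost.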
@@ -2841,20 +2888,20 @@ static void mwifiex_pcie_get_fw_name(struct mwifiex_adapter *adapter)
                version &= 0x7;
                switch (revision_id) {
                case PCIE8997_V2:
-                       if (version == CHIP_VER_PCIEUSB)
+                       if (version == CHIP_VER_PCIEUART)
                                strcpy(adapter->fw_name,
-                                      PCIEUSB8997_FW_NAME_V2);
+                                      PCIEUART8997_FW_NAME_V2);
                        else
                                strcpy(adapter->fw_name,
-                                      PCIEUART8997_FW_NAME_V2);
+                                      PCIEUSB8997_FW_NAME_V2);
                        break;
                case PCIE8997_Z:
-                       if (version == CHIP_VER_PCIEUSB)
+                       if (version == CHIP_VER_PCIEUART)
                                strcpy(adapter->fw_name,
-                                      PCIEUSB8997_FW_NAME_Z);
+                                      PCIEUART8997_FW_NAME_Z);
                        else
                                strcpy(adapter->fw_name,
-                                      PCIEUART8997_FW_NAME_Z);
+                                      PCIEUSB8997_FW_NAME_Z);
                        break;
                default:
                        strcpy(adapter->fw_name, PCIE8997_DEFAULT_FW_NAME);
@@ -2946,6 +2993,7 @@ static struct mwifiex_if_ops pcie_ops = {
        .register_dev =                 mwifiex_register_dev,
        .unregister_dev =               mwifiex_unregister_dev,
        .enable_int =                   mwifiex_pcie_enable_host_int,
+       .disable_int =                  mwifiex_pcie_disable_host_int_noerr,
        .process_int_status =           mwifiex_process_int_status,
        .host_to_card =                 mwifiex_pcie_host_to_card,
        .wakeup =                       mwifiex_pm_wakeup_card,
index 2592e63..f05061c 100644
@@ -32,7 +32,7 @@
 #define PCIE8897_DEFAULT_FW_NAME "mrvl/pcie8897_uapsta.bin"
 #define PCIE8897_A0_FW_NAME "mrvl/pcie8897_uapsta_a0.bin"
 #define PCIE8897_B0_FW_NAME "mrvl/pcie8897_uapsta.bin"
-#define PCIE8997_DEFAULT_FW_NAME "mrvl/pcieuart8997_combo_v2.bin"
+#define PCIE8997_DEFAULT_FW_NAME "mrvl/pcieusb8997_combo_v2.bin"
 #define PCIEUART8997_FW_NAME_Z "mrvl/pcieuart8997_combo.bin"
 #define PCIEUART8997_FW_NAME_V2 "mrvl/pcieuart8997_combo_v2.bin"
 #define PCIEUSB8997_FW_NAME_Z "mrvl/pcieusb8997_combo.bin"
@@ -48,7 +48,7 @@
 #define PCIE8897_B0    0x1200
 #define PCIE8997_Z     0x0
 #define PCIE8997_V2    0x471
-#define CHIP_VER_PCIEUSB       0x2
+#define CHIP_VER_PCIEUART      0x3
 
 /* Constants for Buffer Descriptor (BD) rings */
 #define MWIFIEX_MAX_TXRX_BD                    0x20
@@ -258,7 +258,7 @@ static const struct mwifiex_pcie_card_reg mwifiex_reg_8997 = {
        .fw_dump_end = 0xcff,
        .fw_dump_host_ready = 0xcc,
        .fw_dump_read_done = 0xdd,
-       .msix_support = 1,
+       .msix_support = 0,
 };
 
 static struct memory_type_mapping mem_type_mapping_tbl_w8897[] = {
index fdd7491..21ec847 100644
@@ -1896,7 +1896,8 @@ mwifiex_active_scan_req_for_passive_chan(struct mwifiex_private *priv)
        u8 id = 0;
        struct mwifiex_user_scan_cfg  *user_scan_cfg;
 
-       if (adapter->active_scan_triggered || !priv->scan_request) {
+       if (adapter->active_scan_triggered || !priv->scan_request ||
+           priv->scan_aborting) {
                adapter->active_scan_triggered = false;
                return 0;
        }
@@ -1964,6 +1965,7 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
                                    "info: notifying scan done\n");
                        cfg80211_scan_done(priv->scan_request, &info);
                        priv->scan_request = NULL;
+                       priv->scan_aborting = false;
                } else {
                        priv->scan_aborting = false;
                        mwifiex_dbg(adapter, INFO,
@@ -1989,6 +1991,7 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
                                            "info: aborting scan\n");
                                cfg80211_scan_done(priv->scan_request, &info);
                                priv->scan_request = NULL;
+                               priv->scan_aborting = false;
                        } else {
                                priv->scan_aborting = false;
                                mwifiex_dbg(adapter, INFO,
@@ -2009,6 +2012,37 @@ static void mwifiex_check_next_scan_command(struct mwifiex_private *priv)
        return;
 }
 
+void mwifiex_cancel_scan(struct mwifiex_adapter *adapter)
+{
+       struct mwifiex_private *priv;
+       unsigned long cmd_flags;
+       int i;
+
+       mwifiex_cancel_pending_scan_cmd(adapter);
+
+       if (adapter->scan_processing) {
+               spin_lock_irqsave(&adapter->mwifiex_cmd_lock, cmd_flags);
+               adapter->scan_processing = false;
+               spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, cmd_flags);
+               for (i = 0; i < adapter->priv_num; i++) {
+                       priv = adapter->priv[i];
+                       if (!priv)
+                               continue;
+                       if (priv->scan_request) {
+                               struct cfg80211_scan_info info = {
+                                       .aborted = true,
+                               };
+
+                               mwifiex_dbg(adapter, INFO,
+                                           "info: aborting scan\n");
+                               cfg80211_scan_done(priv->scan_request, &info);
+                               priv->scan_request = NULL;
+                               priv->scan_aborting = false;
+                       }
+               }
+       }
+}
+
 /*
  * This function handles the command response of scan.
  *
index e436574..8c65849 100644
@@ -313,23 +313,41 @@ static int mwifiex_cmd_rf_antenna(struct mwifiex_private *priv,
 
        cmd->command = cpu_to_le16(HostCmd_CMD_RF_ANTENNA);
 
-       if (cmd_action != HostCmd_ACT_GEN_SET)
-               return 0;
-
-       if (priv->adapter->hw_dev_mcs_support == HT_STREAM_2X2) {
-               cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_rf_ant_mimo) +
-                                       S_DS_GEN);
-               ant_mimo->action_tx = cpu_to_le16(HostCmd_ACT_SET_TX);
-               ant_mimo->tx_ant_mode = cpu_to_le16((u16)ant_cfg->tx_ant);
-               ant_mimo->action_rx = cpu_to_le16(HostCmd_ACT_SET_RX);
-               ant_mimo->rx_ant_mode = cpu_to_le16((u16)ant_cfg->rx_ant);
-       } else {
-               cmd->size = cpu_to_le16(sizeof(struct host_cmd_ds_rf_ant_siso) +
-                                       S_DS_GEN);
-               ant_siso->action = cpu_to_le16(HostCmd_ACT_SET_BOTH);
-               ant_siso->ant_mode = cpu_to_le16((u16)ant_cfg->tx_ant);
+       switch (cmd_action) {
+       case HostCmd_ACT_GEN_SET:
+               if (priv->adapter->hw_dev_mcs_support == HT_STREAM_2X2) {
+                       cmd->size = cpu_to_le16(sizeof(struct
+                                               host_cmd_ds_rf_ant_mimo)
+                                               + S_DS_GEN);
+                       ant_mimo->action_tx = cpu_to_le16(HostCmd_ACT_SET_TX);
+                       ant_mimo->tx_ant_mode = cpu_to_le16((u16)ant_cfg->
+                                                           tx_ant);
+                       ant_mimo->action_rx = cpu_to_le16(HostCmd_ACT_SET_RX);
+                       ant_mimo->rx_ant_mode = cpu_to_le16((u16)ant_cfg->
+                                                           rx_ant);
+               } else {
+                       cmd->size = cpu_to_le16(sizeof(struct
+                                               host_cmd_ds_rf_ant_siso) +
+                                               S_DS_GEN);
+                       ant_siso->action = cpu_to_le16(HostCmd_ACT_SET_BOTH);
+                       ant_siso->ant_mode = cpu_to_le16((u16)ant_cfg->tx_ant);
+               }
+               break;
+       case HostCmd_ACT_GEN_GET:
+               if (priv->adapter->hw_dev_mcs_support == HT_STREAM_2X2) {
+                       cmd->size = cpu_to_le16(sizeof(struct
+                                               host_cmd_ds_rf_ant_mimo) +
+                                               S_DS_GEN);
+                       ant_mimo->action_tx = cpu_to_le16(HostCmd_ACT_GET_TX);
+                       ant_mimo->action_rx = cpu_to_le16(HostCmd_ACT_GET_RX);
+               } else {
+                       cmd->size = cpu_to_le16(sizeof(struct
+                                               host_cmd_ds_rf_ant_siso) +
+                                               S_DS_GEN);
+                       ant_siso->action = cpu_to_le16(HostCmd_ACT_GET_BOTH);
+               }
+               break;
        }
-
        return 0;
 }
 
index bcfd4b7..9050d06 100644
@@ -469,7 +469,9 @@ static int mwifiex_ret_rf_antenna(struct mwifiex_private *priv,
        struct host_cmd_ds_rf_ant_siso *ant_siso = &resp->params.ant_siso;
        struct mwifiex_adapter *adapter = priv->adapter;
 
-       if (adapter->hw_dev_mcs_support == HT_STREAM_2X2)
+       if (adapter->hw_dev_mcs_support == HT_STREAM_2X2) {
+               priv->tx_ant = le16_to_cpu(ant_mimo->tx_ant_mode);
+               priv->rx_ant = le16_to_cpu(ant_mimo->rx_ant_mode);
                mwifiex_dbg(adapter, INFO,
                            "RF_ANT_RESP: Tx action = 0x%x, Tx Mode = 0x%04x\t"
                            "Rx action = 0x%x, Rx Mode = 0x%04x\n",
@@ -477,12 +479,14 @@ static int mwifiex_ret_rf_antenna(struct mwifiex_private *priv,
                            le16_to_cpu(ant_mimo->tx_ant_mode),
                            le16_to_cpu(ant_mimo->action_rx),
                            le16_to_cpu(ant_mimo->rx_ant_mode));
-       else
+       } else {
+               priv->tx_ant = le16_to_cpu(ant_siso->ant_mode);
+               priv->rx_ant = le16_to_cpu(ant_siso->ant_mode);
                mwifiex_dbg(adapter, INFO,
                            "RF_ANT_RESP: action = 0x%x, Mode = 0x%04x\n",
                            le16_to_cpu(ant_siso->action),
                            le16_to_cpu(ant_siso->ant_mode));
-
+       }
        return 0;
 }
 
index 0cefd40..a422f33 100644
@@ -474,8 +474,8 @@ void mwifiex_bt_coex_wlan_param_update_event(struct mwifiex_private *priv,
                        scantlv =
                            (struct mwifiex_ie_types_btcoex_scan_time *)tlv;
                        adapter->coex_scan = scantlv->coex_scan;
-                       adapter->coex_min_scan_time = scantlv->min_scan_time;
-                       adapter->coex_max_scan_time = scantlv->max_scan_time;
+                       adapter->coex_min_scan_time = le16_to_cpu(scantlv->min_scan_time);
+                       adapter->coex_max_scan_time = le16_to_cpu(scantlv->max_scan_time);
                        break;
 
                default:
@@ -708,7 +708,7 @@ int mwifiex_process_sta_event(struct mwifiex_private *priv)
 
        case EVENT_EXT_SCAN_REPORT:
                mwifiex_dbg(adapter, EVENT, "event: EXT_SCAN Report\n");
-               if (adapter->ext_scan)
+               if (adapter->ext_scan && !priv->scan_aborting)
                        ret = mwifiex_handle_event_ext_scan_report(priv,
                                                adapter->event_skb->data);
 
index 8e08626..2ba5397 100644
@@ -426,6 +426,10 @@ done:
        if (bss_desc)
                kfree(bss_desc->beacon_buf);
        kfree(bss_desc);
+
+       if (ret < 0)
+               priv->attempted_bss_desc = NULL;
+
        return ret;
 }
 
index f79d00d..a7e9f54 100644
@@ -19,6 +19,7 @@
 
 #include "main.h"
 #include "11ac.h"
+#include "11n.h"
 
 /* This function parses security related parameters from cfg80211_ap_settings
  * and sets into FW understandable bss_config structure.
@@ -521,9 +522,9 @@ mwifiex_uap_bss_param_prepare(u8 *tlv, void *cmd_buf, u16 *param_size)
                tlv += sizeof(struct host_cmd_tlv_rates) + i;
        }
        if (bss_cfg->channel &&
-           ((bss_cfg->band_cfg == BAND_CONFIG_BG &&
+           (((bss_cfg->band_cfg & BIT(0)) == BAND_CONFIG_BG &&
              bss_cfg->channel <= MAX_CHANNEL_BAND_BG) ||
-           (bss_cfg->band_cfg == BAND_CONFIG_A &&
+           ((bss_cfg->band_cfg & BIT(0)) == BAND_CONFIG_A &&
             bss_cfg->channel <= MAX_CHANNEL_BAND_A))) {
                chan_band = (struct host_cmd_tlv_channel_band *)tlv;
                chan_band->header.type = cpu_to_le16(TLV_TYPE_CHANNELBANDLIST);
@@ -833,6 +834,31 @@ void mwifiex_uap_set_channel(struct mwifiex_private *priv,
                        config_bands |= BAND_AAC;
        }
 
+       switch (chandef.width) {
+       case NL80211_CHAN_WIDTH_5:
+       case NL80211_CHAN_WIDTH_10:
+       case NL80211_CHAN_WIDTH_20_NOHT:
+       case NL80211_CHAN_WIDTH_20:
+               break;
+       case NL80211_CHAN_WIDTH_40:
+               if (chandef.center_freq1 < chandef.chan->center_freq)
+                       bss_cfg->band_cfg |= MWIFIEX_SEC_CHAN_BELOW;
+               else
+                       bss_cfg->band_cfg |= MWIFIEX_SEC_CHAN_ABOVE;
+               break;
+       case NL80211_CHAN_WIDTH_80:
+       case NL80211_CHAN_WIDTH_80P80:
+       case NL80211_CHAN_WIDTH_160:
+               bss_cfg->band_cfg |=
+                   mwifiex_get_sec_chan_offset(bss_cfg->channel) << 4;
+               break;
+       default:
+               mwifiex_dbg(priv->adapter,
+                           WARN, "Unknown channel width: %d\n",
+                           chandef.width);
+               break;
+       }
+
        priv->adapter->config_bands = config_bands;
 
        if (old_bands != config_bands) {
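The channel-width switch above packs the secondary-channel offset into bits 4..5 of `band_cfg`: with the IEEE 802.11 HT offset encodings (1 = secondary above, 3 = secondary below), `offset << 4` yields exactly the MWIFIEX_SEC_CHAN_ABOVE (0x10) and MWIFIEX_SEC_CHAN_BELOW (0x30) constants added earlier in this series. A sketch verifying the packing (the helper is illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define MWIFIEX_SEC_CHAN_ABOVE 0x10
#define MWIFIEX_SEC_CHAN_BELOW 0x30

/* IEEE 802.11 HT secondary-channel offsets: 1 = above, 3 = below */
static uint8_t pack_band_cfg(uint8_t band_cfg, unsigned int sec_offset)
{
        return (uint8_t)(band_cfg | (sec_offset << 4));
}
```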
index 870c9cd..4341d56 100644
@@ -135,7 +135,8 @@ struct rtl8xxxu_rxdesc16 {
 
        u32 seq:12;
        u32 frag:4;
-       u32 nextpktlen:14;
+       u32 pkt_cnt:8;
+       u32 reserved:6;
        u32 nextind:1;
        u32 reserved0:1;
 
@@ -198,7 +199,8 @@ struct rtl8xxxu_rxdesc16 {
 
        u32 reserved0:1;
        u32 nextind:1;
-       u32 nextpktlen:14;
+       u32 reserved:6;
+       u32 pkt_cnt:8;
        u32 frag:4;
        u32 seq:12;
 
@@ -1245,6 +1247,7 @@ struct rtl8xxxu_priv {
        u32 ep_tx_normal_queue:1;
        u32 ep_tx_low_queue:1;
        u32 has_xtalk:1;
+       u32 rx_buf_aggregation:1;
        u8 xtalk;
        unsigned int pipe_interrupt;
        unsigned int pipe_in;
@@ -1315,8 +1318,7 @@ struct rtl8xxxu_fileops {
        void (*phy_init_antenna_selection) (struct rtl8xxxu_priv *priv);
        void (*phy_iq_calibrate) (struct rtl8xxxu_priv *priv);
        void (*config_channel) (struct ieee80211_hw *hw);
-       int (*parse_rx_desc) (struct rtl8xxxu_priv *priv, struct sk_buff *skb,
-                             struct ieee80211_rx_status *rx_status);
+       int (*parse_rx_desc) (struct rtl8xxxu_priv *priv, struct sk_buff *skb);
        void (*init_aggregation) (struct rtl8xxxu_priv *priv);
        void (*init_statistics) (struct rtl8xxxu_priv *priv);
        void (*enable_rf) (struct rtl8xxxu_priv *priv);
@@ -1329,6 +1331,7 @@ struct rtl8xxxu_fileops {
        void (*report_connect) (struct rtl8xxxu_priv *priv,
                                u8 macid, bool connect);
        int writeN_block_size;
+       int rx_agg_buf_size;
        char tx_desc_size;
        char rx_desc_size;
        char has_s0s1;
@@ -1409,13 +1412,12 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
                                  u8 macid, bool connect);
 void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
                                  u8 macid, bool connect);
+void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv);
 void rtl8xxxu_gen1_enable_rf(struct rtl8xxxu_priv *priv);
 void rtl8xxxu_gen1_disable_rf(struct rtl8xxxu_priv *priv);
 void rtl8xxxu_gen2_disable_rf(struct rtl8xxxu_priv *priv);
-int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
-                           struct ieee80211_rx_status *rx_status);
-int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
-                           struct ieee80211_rx_status *rx_status);
+int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
+int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb);
 int rtl8xxxu_gen2_channel_to_group(int channel);
 bool rtl8xxxu_gen2_simularity_compare(struct rtl8xxxu_priv *priv,
                                      int result[][8], int c1, int c2);
index 2c86b55..69d1a14 100644
@@ -413,13 +413,8 @@ static int rtl8192cu_parse_efuse(struct rtl8xxxu_priv *priv)
                dev_info(&priv->udev->dev,
                         "%s: dumping efuse (0x%02zx bytes):\n",
                         __func__, sizeof(struct rtl8192cu_efuse));
-               for (i = 0; i < sizeof(struct rtl8192cu_efuse); i += 8) {
-                       dev_info(&priv->udev->dev, "%02x: "
-                                "%02x %02x %02x %02x %02x %02x %02x %02x\n", i,
-                                raw[i], raw[i + 1], raw[i + 2],
-                                raw[i + 3], raw[i + 4], raw[i + 5],
-                                raw[i + 6], raw[i + 7]);
-               }
+               for (i = 0; i < sizeof(struct rtl8192cu_efuse); i += 8)
+                       dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
        }
        return 0;
 }
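The efuse-dump cleanup above replaces a hand-rolled eight-way `%02x` format with `%8ph`, a kernel-only printk extension that prints N bytes as space-separated lowercase hex. A userspace equivalent of what `%8ph` emits, for reference:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* format 8 bytes as "xx xx xx xx xx xx xx xx", the layout %8ph produces */
static void hex8(char *out, const uint8_t *p)
{
        sprintf(out, "%02x %02x %02x %02x %02x %02x %02x %02x",
                p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7]);
}
```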
@@ -565,6 +560,7 @@ struct rtl8xxxu_fileops rtl8192cu_fops = {
        .phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
        .config_channel = rtl8xxxu_gen1_config_channel,
        .parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+       .init_aggregation = rtl8xxxu_gen1_init_aggregation,
        .enable_rf = rtl8xxxu_gen1_enable_rf,
        .disable_rf = rtl8xxxu_gen1_disable_rf,
        .usb_quirks = rtl8xxxu_gen1_usb_quirks,
@@ -572,6 +568,7 @@ struct rtl8xxxu_fileops rtl8192cu_fops = {
        .update_rate_mask = rtl8xxxu_update_rate_mask,
        .report_connect = rtl8xxxu_gen1_report_connect,
        .writeN_block_size = 128,
+       .rx_agg_buf_size = 16000,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
        .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc16),
        .adda_1t_init = 0x0b1b25a0,
index b04cf30..9a1994f 100644 (file)
@@ -622,13 +622,8 @@ static int rtl8192eu_parse_efuse(struct rtl8xxxu_priv *priv)
                dev_info(&priv->udev->dev,
                         "%s: dumping efuse (0x%02zx bytes):\n",
                         __func__, sizeof(struct rtl8192eu_efuse));
-               for (i = 0; i < sizeof(struct rtl8192eu_efuse); i += 8) {
-                       dev_info(&priv->udev->dev, "%02x: "
-                                "%02x %02x %02x %02x %02x %02x %02x %02x\n", i,
-                                raw[i], raw[i + 1], raw[i + 2],
-                                raw[i + 3], raw[i + 4], raw[i + 5],
-                                raw[i + 6], raw[i + 7]);
-               }
+               for (i = 0; i < sizeof(struct rtl8192eu_efuse); i += 8)
+                       dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
        }
        return 0;
 }
@@ -1249,11 +1244,9 @@ static void rtl8192eu_phy_iq_calibrate(struct rtl8xxxu_priv *priv)
                reg_e94 = result[i][0];
                reg_e9c = result[i][1];
                reg_ea4 = result[i][2];
-               reg_eac = result[i][3];
                reg_eb4 = result[i][4];
                reg_ebc = result[i][5];
                reg_ec4 = result[i][6];
-               reg_ecc = result[i][7];
        }
 
        if (candidate >= 0) {
index a8e172c..686c551 100644 (file)
@@ -377,6 +377,7 @@ struct rtl8xxxu_fileops rtl8723au_fops = {
        .phy_iq_calibrate = rtl8xxxu_gen1_phy_iq_calibrate,
        .config_channel = rtl8xxxu_gen1_config_channel,
        .parse_rx_desc = rtl8xxxu_parse_rxdesc16,
+       .init_aggregation = rtl8xxxu_gen1_init_aggregation,
        .enable_rf = rtl8xxxu_gen1_enable_rf,
        .disable_rf = rtl8xxxu_gen1_disable_rf,
        .usb_quirks = rtl8xxxu_gen1_usb_quirks,
@@ -384,6 +385,7 @@ struct rtl8xxxu_fileops rtl8723au_fops = {
        .update_rate_mask = rtl8xxxu_update_rate_mask,
        .report_connect = rtl8xxxu_gen1_report_connect,
        .writeN_block_size = 1024,
+       .rx_agg_buf_size = 16000,
        .tx_desc_size = sizeof(struct rtl8xxxu_txdesc32),
        .rx_desc_size = sizeof(struct rtl8xxxu_rxdesc16),
        .adda_1t_init = 0x0b1b25a0,
index 4186e7c..9d45afb 100644 (file)
@@ -466,13 +466,8 @@ static int rtl8723bu_parse_efuse(struct rtl8xxxu_priv *priv)
                dev_info(&priv->udev->dev,
                         "%s: dumping efuse (0x%02zx bytes):\n",
                         __func__, sizeof(struct rtl8723bu_efuse));
-               for (i = 0; i < sizeof(struct rtl8723bu_efuse); i += 8) {
-                       dev_info(&priv->udev->dev, "%02x: "
-                                "%02x %02x %02x %02x %02x %02x %02x %02x\n", i,
-                                raw[i], raw[i + 1], raw[i + 2],
-                                raw[i + 3], raw[i + 4], raw[i + 5],
-                                raw[i + 6], raw[i + 7]);
-               }
+               for (i = 0; i < sizeof(struct rtl8723bu_efuse); i += 8)
+                       dev_info(&priv->udev->dev, "%02x: %8ph\n", i, &raw[i]);
        }
 
        return 0;
index 9f6dbb4..77048db 100644 (file)
@@ -44,6 +44,9 @@
 
 int rtl8xxxu_debug = RTL8XXXU_DEBUG_EFUSE;
 static bool rtl8xxxu_ht40_2g;
+static bool rtl8xxxu_dma_aggregation;
+static int rtl8xxxu_dma_agg_timeout = -1;
+static int rtl8xxxu_dma_agg_pages = -1;
 
 MODULE_AUTHOR("Jes Sorensen <Jes.Sorensen@redhat.com>");
 MODULE_DESCRIPTION("RTL8XXXu USB mac80211 Wireless LAN Driver");
@@ -62,10 +65,14 @@ module_param_named(debug, rtl8xxxu_debug, int, 0600);
 MODULE_PARM_DESC(debug, "Set debug mask");
 module_param_named(ht40_2g, rtl8xxxu_ht40_2g, bool, 0600);
 MODULE_PARM_DESC(ht40_2g, "Enable HT40 support on the 2.4GHz band");
+module_param_named(dma_aggregation, rtl8xxxu_dma_aggregation, bool, 0600);
+MODULE_PARM_DESC(dma_aggregation, "Enable DMA packet aggregation");
+module_param_named(dma_agg_timeout, rtl8xxxu_dma_agg_timeout, int, 0600);
+MODULE_PARM_DESC(dma_agg_timeout, "Set DMA aggregation timeout (range 1-127)");
+module_param_named(dma_agg_pages, rtl8xxxu_dma_agg_pages, int, 0600);
+MODULE_PARM_DESC(dma_agg_pages, "Set DMA aggregation pages (range 1-127, 0 to disable)");
 
 #define USB_VENDOR_ID_REALTEK          0x0bda
-/* Minimum IEEE80211_MAX_FRAME_LEN */
-#define RTL_RX_BUFFER_SIZE             IEEE80211_MAX_FRAME_LEN
 #define RTL8XXXU_RX_URBS               32
 #define RTL8XXXU_RX_URB_PENDING_WATER  8
 #define RTL8XXXU_TX_URBS               64
@@ -4407,6 +4414,73 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
        rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
 }
 
+void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
+{
+       u8 agg_ctrl, usb_spec, page_thresh, timeout;
+
+       usb_spec = rtl8xxxu_read8(priv, REG_USB_SPECIAL_OPTION);
+       usb_spec &= ~USB_SPEC_USB_AGG_ENABLE;
+       rtl8xxxu_write8(priv, REG_USB_SPECIAL_OPTION, usb_spec);
+
+       agg_ctrl = rtl8xxxu_read8(priv, REG_TRXDMA_CTRL);
+       agg_ctrl &= ~TRXDMA_CTRL_RXDMA_AGG_EN;
+
+       if (!rtl8xxxu_dma_aggregation) {
+               rtl8xxxu_write8(priv, REG_TRXDMA_CTRL, agg_ctrl);
+               return;
+       }
+
+       agg_ctrl |= TRXDMA_CTRL_RXDMA_AGG_EN;
+       rtl8xxxu_write8(priv, REG_TRXDMA_CTRL, agg_ctrl);
+
+       /*
+        * The number of packets we can take looks to be buffer size / 512
+        * which matches the 512 byte rounding we have to do when de-muxing
+        * the packets.
+        *
+        * Sample numbers from the vendor driver:
+        * USB High-Speed mode values:
+        *   RxAggBlockCount = 8 : 512 byte unit
+        *   RxAggBlockTimeout = 6
+        *   RxAggPageCount = 48 : 128 byte unit
+        *   RxAggPageTimeout = 4 or 6 (absolute time 34ms/(2^6))
+        */
+
+       page_thresh = (priv->fops->rx_agg_buf_size / 512);
+       if (rtl8xxxu_dma_agg_pages >= 0) {
+               if (rtl8xxxu_dma_agg_pages <= page_thresh)
+                       page_thresh = rtl8xxxu_dma_agg_pages;
+               else if (rtl8xxxu_dma_agg_pages <= 6)
+                       dev_err(&priv->udev->dev,
+                               "%s: dma_agg_pages=%i too small, minimum is 6\n",
+                               __func__, rtl8xxxu_dma_agg_pages);
+               else
+                       dev_err(&priv->udev->dev,
+                               "%s: dma_agg_pages=%i larger than limit %i\n",
+                               __func__, rtl8xxxu_dma_agg_pages, page_thresh);
+       }
+       rtl8xxxu_write8(priv, REG_RXDMA_AGG_PG_TH, page_thresh);
+       /*
+        * REG_RXDMA_AGG_PG_TH + 1 seems to be the timeout register on
+        * gen2 chips and rtl8188eu. The rtl8723au seems unhappy if we
+        * don't set it, so better set both.
+        */
+       timeout = 4;
+
+       if (rtl8xxxu_dma_agg_timeout >= 0) {
+               if (rtl8xxxu_dma_agg_timeout <= 127)
+                       timeout = rtl8xxxu_dma_agg_timeout;
+               else
+                       dev_err(&priv->udev->dev,
+                               "%s: Invalid dma_agg_timeout: %i\n",
+                               __func__, rtl8xxxu_dma_agg_timeout);
+       }
+
+       rtl8xxxu_write8(priv, REG_RXDMA_AGG_PG_TH + 1, timeout);
+       rtl8xxxu_write8(priv, REG_USB_DMA_AGG_TO, timeout);
+       priv->rx_buf_aggregation = 1;
+}
+
 static void rtl8xxxu_set_basic_rates(struct rtl8xxxu_priv *priv, u32 rate_cfg)
 {
        u32 val32;
@@ -5045,55 +5119,143 @@ static void rtl8xxxu_rx_urb_work(struct work_struct *work)
        }
 }
 
-int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
-                           struct ieee80211_rx_status *rx_status)
+static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
+                                struct sk_buff *skb)
+{
+       struct rtl8723bu_c2h *c2h = (struct rtl8723bu_c2h *)skb->data;
+       struct device *dev = &priv->udev->dev;
+       int len;
+
+       len = skb->len - 2;
+
+       dev_dbg(dev, "C2H ID %02x seq %02x, len %02x source %02x\n",
+               c2h->id, c2h->seq, len, c2h->bt_info.response_source);
+
+       switch(c2h->id) {
+       case C2H_8723B_BT_INFO:
+               if (c2h->bt_info.response_source >
+                   BT_INFO_SRC_8723B_BT_ACTIVE_SEND)
+                       dev_dbg(dev, "C2H_BT_INFO WiFi only firmware\n");
+               else
+                       dev_dbg(dev, "C2H_BT_INFO BT/WiFi coexist firmware\n");
+
+               if (c2h->bt_info.bt_has_reset)
+                       dev_dbg(dev, "BT has been reset\n");
+               if (c2h->bt_info.tx_rx_mask)
+                       dev_dbg(dev, "BT TRx mask\n");
+
+               break;
+       case C2H_8723B_BT_MP_INFO:
+               dev_dbg(dev, "C2H_MP_INFO ext ID %02x, status %02x\n",
+                       c2h->bt_mp_info.ext_id, c2h->bt_mp_info.status);
+               break;
+       case C2H_8723B_RA_REPORT:
+               dev_dbg(dev,
+                       "C2H RA RPT: rate %02x, unk %i, macid %02x, noise %i\n",
+                       c2h->ra_report.rate, c2h->ra_report.dummy0_0,
+                       c2h->ra_report.macid, c2h->ra_report.noisy_state);
+               break;
+       default:
+               dev_info(dev, "Unhandled C2H event %02x seq %02x\n",
+                        c2h->id, c2h->seq);
+               print_hex_dump(KERN_INFO, "C2H content: ", DUMP_PREFIX_NONE,
+                              16, 1, c2h->raw.payload, len, false);
+               break;
+       }
+}
+
+int rtl8xxxu_parse_rxdesc16(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
 {
-       struct rtl8xxxu_rxdesc16 *rx_desc =
-               (struct rtl8xxxu_rxdesc16 *)skb->data;
+       struct ieee80211_hw *hw = priv->hw;
+       struct ieee80211_rx_status *rx_status;
+       struct rtl8xxxu_rxdesc16 *rx_desc;
        struct rtl8723au_phy_stats *phy_stats;
-       __le32 *_rx_desc_le = (__le32 *)skb->data;
-       u32 *_rx_desc = (u32 *)skb->data;
+       struct sk_buff *next_skb = NULL;
+       __le32 *_rx_desc_le;
+       u32 *_rx_desc;
        int drvinfo_sz, desc_shift;
-       int i;
+       int i, pkt_cnt, pkt_len, urb_len, pkt_offset;
 
-       for (i = 0; i < (sizeof(struct rtl8xxxu_rxdesc16) / sizeof(u32)); i++)
-               _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
+       urb_len = skb->len;
+       pkt_cnt = 0;
 
-       skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc16));
+       do {
+               rx_desc = (struct rtl8xxxu_rxdesc16 *)skb->data;
+               _rx_desc_le = (__le32 *)skb->data;
+               _rx_desc = (u32 *)skb->data;
 
-       phy_stats = (struct rtl8723au_phy_stats *)skb->data;
+               for (i = 0;
+                    i < (sizeof(struct rtl8xxxu_rxdesc16) / sizeof(u32)); i++)
+                       _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
 
-       drvinfo_sz = rx_desc->drvinfo_sz * 8;
-       desc_shift = rx_desc->shift;
-       skb_pull(skb, drvinfo_sz + desc_shift);
+               /*
+                * Only read pkt_cnt from the header if we're parsing the
+                * first packet
+                */
+               if (!pkt_cnt)
+                       pkt_cnt = rx_desc->pkt_cnt;
+               pkt_len = rx_desc->pktlen;
 
-       if (rx_desc->phy_stats)
-               rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
-                                          rx_desc->rxmcs);
+               drvinfo_sz = rx_desc->drvinfo_sz * 8;
+               desc_shift = rx_desc->shift;
+               pkt_offset = roundup(pkt_len + drvinfo_sz + desc_shift +
+                                    sizeof(struct rtl8xxxu_rxdesc16), 128);
 
-       rx_status->mactime = le32_to_cpu(rx_desc->tsfl);
-       rx_status->flag |= RX_FLAG_MACTIME_START;
+               if (pkt_cnt > 1)
+                       next_skb = skb_clone(skb, GFP_ATOMIC);
 
-       if (!rx_desc->swdec)
-               rx_status->flag |= RX_FLAG_DECRYPTED;
-       if (rx_desc->crc32)
-               rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
-       if (rx_desc->bw)
-               rx_status->flag |= RX_FLAG_40MHZ;
+               rx_status = IEEE80211_SKB_RXCB(skb);
+               memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
 
-       if (rx_desc->rxht) {
-               rx_status->flag |= RX_FLAG_HT;
-               rx_status->rate_idx = rx_desc->rxmcs - DESC_RATE_MCS0;
-       } else {
-               rx_status->rate_idx = rx_desc->rxmcs;
-       }
+               skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc16));
+
+               phy_stats = (struct rtl8723au_phy_stats *)skb->data;
+
+               skb_pull(skb, drvinfo_sz + desc_shift);
+
+               skb_trim(skb, pkt_len);
+
+               if (rx_desc->phy_stats)
+                       rtl8xxxu_rx_parse_phystats(priv, rx_status, phy_stats,
+                                                  rx_desc->rxmcs);
+
+               rx_status->mactime = le32_to_cpu(rx_desc->tsfl);
+               rx_status->flag |= RX_FLAG_MACTIME_START;
+
+               if (!rx_desc->swdec)
+                       rx_status->flag |= RX_FLAG_DECRYPTED;
+               if (rx_desc->crc32)
+                       rx_status->flag |= RX_FLAG_FAILED_FCS_CRC;
+               if (rx_desc->bw)
+                       rx_status->flag |= RX_FLAG_40MHZ;
+
+               if (rx_desc->rxht) {
+                       rx_status->flag |= RX_FLAG_HT;
+                       rx_status->rate_idx = rx_desc->rxmcs - DESC_RATE_MCS0;
+               } else {
+                       rx_status->rate_idx = rx_desc->rxmcs;
+               }
+
+               rx_status->freq = hw->conf.chandef.chan->center_freq;
+               rx_status->band = hw->conf.chandef.chan->band;
+
+               ieee80211_rx_irqsafe(hw, skb);
+
+               skb = next_skb;
+               if (skb)
+                       skb_pull(next_skb, pkt_offset);
+
+               pkt_cnt--;
+               urb_len -= pkt_offset;
+       } while (skb && urb_len > 0 && pkt_cnt > 0);
 
        return RX_TYPE_DATA_PKT;
 }
 
-int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
-                           struct ieee80211_rx_status *rx_status)
+int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb)
 {
+       struct ieee80211_hw *hw = priv->hw;
+       struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
        struct rtl8xxxu_rxdesc24 *rx_desc =
                (struct rtl8xxxu_rxdesc24 *)skb->data;
        struct rtl8723au_phy_stats *phy_stats;
@@ -5105,6 +5267,8 @@ int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
        for (i = 0; i < (sizeof(struct rtl8xxxu_rxdesc24) / sizeof(u32)); i++)
                _rx_desc[i] = le32_to_cpu(_rx_desc_le[i]);
 
+       memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
+
        skb_pull(skb, sizeof(struct rtl8xxxu_rxdesc24));
 
        phy_stats = (struct rtl8723au_phy_stats *)skb->data;
@@ -5116,6 +5280,8 @@ int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
        if (rx_desc->rpt_sel) {
                struct device *dev = &priv->udev->dev;
                dev_dbg(dev, "%s: C2H packet\n", __func__);
+               rtl8723bu_handle_c2h(priv, skb);
+               dev_kfree_skb(skb);
                return RX_TYPE_C2H;
        }
 
@@ -5140,52 +5306,11 @@ int rtl8xxxu_parse_rxdesc24(struct rtl8xxxu_priv *priv, struct sk_buff *skb,
                rx_status->rate_idx = rx_desc->rxmcs;
        }
 
-       return RX_TYPE_DATA_PKT;
-}
-
-static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
-                                struct sk_buff *skb)
-{
-       struct rtl8723bu_c2h *c2h = (struct rtl8723bu_c2h *)skb->data;
-       struct device *dev = &priv->udev->dev;
-       int len;
-
-       len = skb->len - 2;
-
-       dev_dbg(dev, "C2H ID %02x seq %02x, len %02x source %02x\n",
-               c2h->id, c2h->seq, len, c2h->bt_info.response_source);
-
-       switch(c2h->id) {
-       case C2H_8723B_BT_INFO:
-               if (c2h->bt_info.response_source >
-                   BT_INFO_SRC_8723B_BT_ACTIVE_SEND)
-                       dev_dbg(dev, "C2H_BT_INFO WiFi only firmware\n");
-               else
-                       dev_dbg(dev, "C2H_BT_INFO BT/WiFi coexist firmware\n");
-
-               if (c2h->bt_info.bt_has_reset)
-                       dev_dbg(dev, "BT has been reset\n");
-               if (c2h->bt_info.tx_rx_mask)
-                       dev_dbg(dev, "BT TRx mask\n");
+       rx_status->freq = hw->conf.chandef.chan->center_freq;
+       rx_status->band = hw->conf.chandef.chan->band;
 
-               break;
-       case C2H_8723B_BT_MP_INFO:
-               dev_dbg(dev, "C2H_MP_INFO ext ID %02x, status %02x\n",
-                       c2h->bt_mp_info.ext_id, c2h->bt_mp_info.status);
-               break;
-       case C2H_8723B_RA_REPORT:
-               dev_dbg(dev,
-                       "C2H RA RPT: rate %02x, unk %i, macid %02x, noise %i\n",
-                       c2h->ra_report.rate, c2h->ra_report.dummy0_0,
-                       c2h->ra_report.macid, c2h->ra_report.noisy_state);
-               break;
-       default:
-               dev_info(dev, "Unhandled C2H event %02x seq %02x\n",
-                        c2h->id, c2h->seq);
-               print_hex_dump(KERN_INFO, "C2H content: ", DUMP_PREFIX_NONE,
-                              16, 1, c2h->raw.payload, len, false);
-               break;
-       }
+       ieee80211_rx_irqsafe(hw, skb);
+       return RX_TYPE_DATA_PKT;
 }
 
 static void rtl8xxxu_rx_complete(struct urb *urb)
@@ -5195,26 +5320,12 @@ static void rtl8xxxu_rx_complete(struct urb *urb)
        struct ieee80211_hw *hw = rx_urb->hw;
        struct rtl8xxxu_priv *priv = hw->priv;
        struct sk_buff *skb = (struct sk_buff *)urb->context;
-       struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
        struct device *dev = &priv->udev->dev;
-       int rx_type;
 
        skb_put(skb, urb->actual_length);
 
        if (urb->status == 0) {
-               memset(rx_status, 0, sizeof(struct ieee80211_rx_status));
-
-               rx_type = priv->fops->parse_rx_desc(priv, skb, rx_status);
-
-               rx_status->freq = hw->conf.chandef.chan->center_freq;
-               rx_status->band = hw->conf.chandef.chan->band;
-
-               if (rx_type == RX_TYPE_DATA_PKT)
-                       ieee80211_rx_irqsafe(hw, skb);
-               else {
-                       rtl8723bu_handle_c2h(priv, skb);
-                       dev_kfree_skb(skb);
-               }
+               priv->fops->parse_rx_desc(priv, skb);
 
                skb = NULL;
                rx_urb->urb.context = NULL;
@@ -5234,12 +5345,20 @@ cleanup:
 static int rtl8xxxu_submit_rx_urb(struct rtl8xxxu_priv *priv,
                                  struct rtl8xxxu_rx_urb *rx_urb)
 {
+       struct rtl8xxxu_fileops *fops = priv->fops;
        struct sk_buff *skb;
        int skb_size;
        int ret, rx_desc_sz;
 
-       rx_desc_sz = priv->fops->rx_desc_size;
-       skb_size = rx_desc_sz + RTL_RX_BUFFER_SIZE;
+       rx_desc_sz = fops->rx_desc_size;
+
+       if (priv->rx_buf_aggregation && fops->rx_agg_buf_size) {
+               skb_size = fops->rx_agg_buf_size;
+               skb_size += (rx_desc_sz + sizeof(struct rtl8723au_phy_stats));
+       } else {
+               skb_size = IEEE80211_MAX_FRAME_LEN;
+       }
+
        skb = __netdev_alloc_skb(NULL, skb_size, GFP_KERNEL);
        if (!skb)
                return -ENOMEM;
@@ -5267,7 +5386,7 @@ static void rtl8xxxu_int_complete(struct urb *urb)
                if (ret)
                        usb_unanchor_urb(urb);
        } else {
-               dev_info(dev, "%s: Error %i\n", __func__, urb->status);
+               dev_dbg(dev, "%s: Error %i\n", __func__, urb->status);
        }
 }
 
index b0e0c64..921c565 100644 (file)
 #define REG_DWBCN1_CTRL_8723B          0x0228
 
 /* 0x0280 ~ 0x02FF     RXDMA Configuration */
-#define REG_RXDMA_AGG_PG_TH            0x0280
+#define REG_RXDMA_AGG_PG_TH            0x0280  /* 0-7 : USB DMA size bits
+                                                  8-14: USB DMA timeout
+                                                  15  : Aggregation enable
+                                                        Only seems to be used
+                                                        on 8723bu/8192eu */
 #define  RXDMA_USB_AGG_ENABLE          BIT(31)
 #define REG_RXPKT_NUM                  0x0284
 #define  RXPKT_NUM_RXDMA_IDLE          BIT(17)
 #define  USB_HIMR_ROK                  BIT(0)  /*  Receive DMA OK Interrupt */
 
 #define REG_USB_SPECIAL_OPTION         0xfe55
+#define  USB_SPEC_USB_AGG_ENABLE       BIT(3)  /* Enable USB aggregation */
+#define  USB_SPEC_INT_BULK_SELECT      BIT(4)  /* Use interrupt endpoint to
+                                                  deliver interrupt packet.
+                                                  0: Use int, 1: use bulk */
 #define REG_USB_HRPWM                  0xfe58
 #define REG_USB_DMA_AGG_TO             0xfe5b
-#define REG_USB_AGG_TO                 0xfe5c
-#define REG_USB_AGG_TH                 0xfe5d
+#define REG_USB_AGG_TIMEOUT            0xfe5c
+#define REG_USB_AGG_THRESH             0xfe5d
 
 #define REG_NORMAL_SIE_VID             0xfe60  /* 0xfe60 - 0xfe61 */
 #define REG_NORMAL_SIE_PID             0xfe62  /* 0xfe62 - 0xfe63 */
index b660c21..91cc139 100644 (file)
@@ -901,7 +901,7 @@ void exhalbtc_stack_update_profile_info(void)
 {
 }
 
-void exhalbtc_update_min_bt_rssi(char bt_rssi)
+void exhalbtc_update_min_bt_rssi(s8 bt_rssi)
 {
        struct btc_coexist *btcoexist = &gl_bt_coexist;
 
index 3cbe34c..3d308eb 100644 (file)
@@ -433,7 +433,7 @@ struct btc_stack_info {
        u8 num_of_hid;
        bool pan_exist;
        bool unknown_acl_exist;
-       char min_bt_rssi;
+       s8 min_bt_rssi;
 };
 
 struct btc_statistics {
@@ -537,7 +537,7 @@ void exhalbtc_dbg_control(struct btc_coexist *btcoexist, u8 code, u8 len,
 void exhalbtc_stack_update_profile_info(void);
 void exhalbtc_set_hci_version(u16 hci_version);
 void exhalbtc_set_bt_patch_version(u16 bt_hci_version, u16 bt_patch_version);
-void exhalbtc_update_min_bt_rssi(char bt_rssi);
+void exhalbtc_update_min_bt_rssi(s8 bt_rssi);
 void exhalbtc_set_bt_exist(bool bt_exist);
 void exhalbtc_set_chip_type(u8 chip_type);
 void exhalbtc_set_ant_num(struct rtl_priv *rtlpriv, u8 type, u8 ant_num);
index 3a0faa8..41f77f8 100644 (file)
@@ -526,7 +526,7 @@ static void _rtl_add_wowlan_patterns(struct ieee80211_hw *hw,
                /* 3. calculate crc */
                rtl_pattern.crc = _calculate_wol_pattern_crc(content, len);
                RT_TRACE(rtlpriv, COMP_POWER, DBG_TRACE,
-                        "CRC_Remainder = 0x%x", rtl_pattern.crc);
+                        "CRC_Remainder = 0x%x\n", rtl_pattern.crc);
 
                /* 4. write crc & mask_for_hw to hw */
                rtlpriv->cfg->ops->add_wowlan_pattern(hw, &rtl_pattern, i);
index fd25aba..33905bb 100644 (file)
@@ -48,3 +48,28 @@ void rtl_dbgp_flag_init(struct ieee80211_hw *hw)
        /*Init Debug flag enable condition */
 }
 EXPORT_SYMBOL_GPL(rtl_dbgp_flag_init);
+
+#ifdef CONFIG_RTLWIFI_DEBUG
+void _rtl_dbg_trace(struct rtl_priv *rtlpriv, int comp, int level,
+                   const char *modname, const char *fmt, ...)
+{
+       if (unlikely((comp & rtlpriv->dbg.global_debugcomponents) &&
+                    (level <= rtlpriv->dbg.global_debuglevel))) {
+               struct va_format vaf;
+               va_list args;
+
+               va_start(args, fmt);
+
+               vaf.fmt = fmt;
+               vaf.va = &args;
+
+               printk(KERN_DEBUG "%s:%ps:<%lx-%x> %pV",
+                      modname, __builtin_return_address(0),
+                      in_interrupt(), in_atomic(),
+                      &vaf);
+
+               va_end(args);
+       }
+}
+EXPORT_SYMBOL_GPL(_rtl_dbg_trace);
+#endif
index fc794b3..6156a79 100644 (file)
@@ -174,15 +174,16 @@ do {                                                                      \
        }                                                               \
 } while (0)
 
+
+struct rtl_priv;
+
+__printf(5, 6)
+void _rtl_dbg_trace(struct rtl_priv *rtlpriv, int comp, int level,
+                   const char *modname, const char *fmt, ...);
+
 #define RT_TRACE(rtlpriv, comp, level, fmt, ...)                       \
-do {                                                                   \
-       if (unlikely(((comp) & rtlpriv->dbg.global_debugcomponents) &&  \
-                    ((level) <= rtlpriv->dbg.global_debuglevel))) {    \
-               printk(KERN_DEBUG KBUILD_MODNAME ":%s():<%lx-%x> " fmt, \
-                      __func__, in_interrupt(), in_atomic(),           \
-                      ##__VA_ARGS__);                                  \
-       }                                                               \
-} while (0)
+       _rtl_dbg_trace(rtlpriv, comp, level,                            \
+                      KBUILD_MODNAME, fmt, ##__VA_ARGS__)
 
 #define RTPRINT(rtlpriv, dbgtype, dbgflag, fmt, ...)                   \
 do {                                                                   \
index 0b4082c..7becfef 100644 (file)
@@ -24,6 +24,7 @@
  *****************************************************************************/
 #include "wifi.h"
 #include "efuse.h"
+#include "pci.h"
 #include <linux/export.h>
 
 static const u8 MAX_PGPKT_SIZE = 9;
@@ -1243,3 +1244,80 @@ static u8 efuse_calculate_word_cnts(u8 word_en)
        return word_cnts;
 }
 
+int rtl_get_hwinfo(struct ieee80211_hw *hw, struct rtl_priv *rtlpriv,
+                  int max_size, u8 *hwinfo, int *params)
+{
+       struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
+       struct rtl_pci_priv *rtlpcipriv = rtl_pcipriv(hw);
+       struct device *dev = &rtlpcipriv->dev.pdev->dev;
+       u16 eeprom_id;
+       u16 i, usvalue;
+
+       switch (rtlefuse->epromtype) {
+       case EEPROM_BOOT_EFUSE:
+               rtl_efuse_shadow_map_update(hw);
+               break;
+
+       case EEPROM_93C46:
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
+                        "RTL8XXX did not boot from eeprom, check it !!\n");
+               return 1;
+
+       default:
+               dev_warn(dev, "no efuse data\n");
+               return 1;
+       }
+
+       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], max_size);
+
+       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP",
+                     hwinfo, max_size);
+
+       eeprom_id = *((u16 *)&hwinfo[0]);
+       if (eeprom_id != params[0]) {
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
+                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
+               rtlefuse->autoload_failflag = true;
+       } else {
+               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
+               rtlefuse->autoload_failflag = false;
+       }
+
+       if (rtlefuse->autoload_failflag)
+               return 1;
+
+       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[params[1]];
+       rtlefuse->eeprom_did = *(u16 *)&hwinfo[params[2]];
+       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[params[3]];
+       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[params[4]];
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROMId = 0x%4x\n", eeprom_id);
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
+
+       for (i = 0; i < 6; i += 2) {
+               usvalue = *(u16 *)&hwinfo[params[5] + i];
+               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
+       }
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "%pM\n", rtlefuse->dev_addr);
+
+       rtlefuse->eeprom_channelplan = hwinfo[params[6]];
+       rtlefuse->eeprom_version = *(u16 *)&hwinfo[params[7]];
+       rtlefuse->txpwr_fromeprom = true;
+       rtlefuse->eeprom_oemid = hwinfo[params[8]];
+
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
+                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
+
+       /* set channel plan to world wide 13 */
+       rtlefuse->channel_plan = params[9];
+
+       return 0;
+}
+EXPORT_SYMBOL_GPL(rtl_get_hwinfo);
index be02e78..51aa121 100644 (file)
@@ -109,5 +109,7 @@ bool efuse_shadow_update_chk(struct ieee80211_hw *hw);
 void rtl_efuse_shadow_map_update(struct ieee80211_hw *hw);
 void efuse_force_write_vendor_Id(struct ieee80211_hw *hw);
 void efuse_re_pg_section(struct ieee80211_hw *hw, u8 section_idx);
+int rtl_get_hwinfo(struct ieee80211_hw *hw, struct rtl_priv *rtlpriv,
+                  int max_size, u8 *hwinfo, int *params);
 
 #endif
index 93579ca..9a64f9b 100644 (file)
@@ -76,9 +76,9 @@ bool rtl_ps_disable_nic(struct ieee80211_hw *hw)
 }
 EXPORT_SYMBOL(rtl_ps_disable_nic);
 
-bool rtl_ps_set_rf_state(struct ieee80211_hw *hw,
-                        enum rf_pwrstate state_toset,
-                        u32 changesource, bool protect_or_not)
+static bool rtl_ps_set_rf_state(struct ieee80211_hw *hw,
+                               enum rf_pwrstate state_toset,
+                               u32 changesource)
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_ps_ctl *ppsc = rtl_psc(rtl_priv(hw));
@@ -86,9 +86,6 @@ bool rtl_ps_set_rf_state(struct ieee80211_hw *hw,
        bool actionallowed = false;
        u16 rfwait_cnt = 0;
 
-       if (protect_or_not)
-               goto no_protect;
-
        /*Only one thread can change
         *the RF state at one time, and others
         *should wait to be executed.
@@ -119,7 +116,6 @@ bool rtl_ps_set_rf_state(struct ieee80211_hw *hw,
                }
        }
 
-no_protect:
        rtstate = ppsc->rfpwr_state;
 
        switch (state_toset) {
@@ -162,15 +158,12 @@ no_protect:
        if (actionallowed)
                rtlpriv->cfg->ops->set_rf_power_state(hw, state_toset);
 
-       if (!protect_or_not) {
-               spin_lock(&rtlpriv->locks.rf_ps_lock);
-               ppsc->rfchange_inprogress = false;
-               spin_unlock(&rtlpriv->locks.rf_ps_lock);
-       }
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
+       ppsc->rfchange_inprogress = false;
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
 
        return actionallowed;
 }
-EXPORT_SYMBOL(rtl_ps_set_rf_state);
 
 static void _rtl_ps_inactive_ps(struct ieee80211_hw *hw)
 {
@@ -191,7 +184,7 @@ static void _rtl_ps_inactive_ps(struct ieee80211_hw *hw)
        }
 
        rtl_ps_set_rf_state(hw, ppsc->inactive_pwrstate,
-                           RF_CHANGE_BY_IPS, false);
+                           RF_CHANGE_BY_IPS);
 
        if (ppsc->inactive_pwrstate == ERFOFF &&
            rtlhal->interface == INTF_PCI) {
@@ -587,7 +580,7 @@ void rtl_swlps_rf_awake(struct ieee80211_hw *hw)
        }
 
        spin_lock_irqsave(&rtlpriv->locks.lps_lock, flag);
-       rtl_ps_set_rf_state(hw, ERFON, RF_CHANGE_BY_PS, false);
+       rtl_ps_set_rf_state(hw, ERFON, RF_CHANGE_BY_PS);
        spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flag);
 }
 
@@ -630,7 +623,7 @@ void rtl_swlps_rf_sleep(struct ieee80211_hw *hw)
        spin_unlock(&rtlpriv->locks.rf_ps_lock);
 
        spin_lock_irqsave(&rtlpriv->locks.lps_lock, flag);
-       rtl_ps_set_rf_state(hw, ERFSLEEP, RF_CHANGE_BY_PS , false);
+       rtl_ps_set_rf_state(hw, ERFSLEEP, RF_CHANGE_BY_PS);
        spin_unlock_irqrestore(&rtlpriv->locks.lps_lock, flag);
 
        if (ppsc->reg_rfps_level & RT_RF_OFF_LEVL_ASPM &&
index 29dfc51..0df2b52 100644
@@ -28,9 +28,6 @@
 
 #define MAX_SW_LPS_SLEEP_INTV  5
 
-bool rtl_ps_set_rf_state(struct ieee80211_hw *hw,
-                        enum rf_pwrstate state_toset, u32 changesource,
-                        bool protect_or_not);
 bool rtl_ps_enable_nic(struct ieee80211_hw *hw);
 bool rtl_ps_disable_nic(struct ieee80211_hw *hw);
 void rtl_ips_nic_off(struct ieee80211_hw *hw);
index 1aca777..ce8621a 100644
@@ -94,7 +94,7 @@ static void _rtl_rc_rate_set_series(struct rtl_priv *rtlpriv,
                                    struct ieee80211_sta *sta,
                                    struct ieee80211_tx_rate *rate,
                                    struct ieee80211_tx_rate_control *txrc,
-                                   u8 tries, char rix, int rtsctsenable,
+                                   u8 tries, s8 rix, int rtsctsenable,
                                    bool not_data)
 {
        struct rtl_mac *mac = rtl_mac(rtlpriv);
index db9a782..f936a49 100644
@@ -886,7 +886,7 @@ static void dm_txpower_track_cb_therm(struct ieee80211_hw *hw)
        u8 thermalvalue_avg_count = 0;
        u32 thermalvalue_avg = 0;
        long  ele_d, temp_cck;
-       char ofdm_index[2], cck_index = 0,
+       s8 ofdm_index[2], cck_index = 0,
                ofdm_index_old[2] = {0, 0}, cck_index_old = 0;
        int i = 0;
        /*bool is2t = false;*/
@@ -898,7 +898,7 @@ static void dm_txpower_track_cb_therm(struct ieee80211_hw *hw)
        /*0.1 the following TWO tables decide the
         *final index of OFDM/CCK swing table
         */
-       char delta_swing_table_idx[2][15]  = {
+       s8 delta_swing_table_idx[2][15]  = {
                {0, 0, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 10, 11},
                {0, 0, -1, -2, -3, -4, -4, -4, -4, -5, -7, -8, -9, -9, -10}
        };
@@ -1790,6 +1790,7 @@ void rtl88e_dm_watchdog(struct ieee80211_hw *hw)
        if (ppsc->p2p_ps_info.p2p_ps_mode)
                fw_ps_awake = false;
 
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
        if ((ppsc->rfpwr_state == ERFON) &&
            ((!fw_current_inpsmode) && fw_ps_awake) &&
            (!ppsc->rfchange_inprogress)) {
@@ -1802,4 +1803,5 @@ void rtl88e_dm_watchdog(struct ieee80211_hw *hw)
                rtl88e_dm_check_edca_turbo(hw);
                rtl88e_dm_antenna_diversity(hw);
        }
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
 }
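The hunk above extends rf_ps_lock across the whole rtl88e_dm_watchdog body, so the rfchange_inprogress test can no longer race with rtl_ps_set_rf_state flipping that flag. A minimal userspace sketch of the pattern (a pthread mutex stands in for the kernel spinlock; the harness and helper names are hypothetical):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t rf_ps_lock = PTHREAD_MUTEX_INITIALIZER;
static bool rfchange_inprogress;

/* rtl_ps_set_rf_state side: mark the RF change in progress under the lock */
static void rf_change_begin(void)
{
	pthread_mutex_lock(&rf_ps_lock);
	rfchange_inprogress = true;
	pthread_mutex_unlock(&rf_ps_lock);
}

/* watchdog side: the dynamic-mechanism pass only runs while holding the
 * same lock, so it observes a consistent rfchange_inprogress value */
static bool watchdog_may_run(void)
{
	bool allowed;

	pthread_mutex_lock(&rf_ps_lock);
	allowed = !rfchange_inprogress;
	pthread_mutex_unlock(&rf_ps_lock);
	return allowed;
}
```

With the lock held for the duration of the check, a concurrent RF state change is either fully observed or not started, never half-done.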
index e26a233..4ab6201 100644
@@ -1835,76 +1835,24 @@ static void _rtl88ee_read_adapter_info(struct ieee80211_hw *hw)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
-
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
+       int params[] = {RTL8188E_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
+
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
 
-       default:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "boot from neither eeprom nor efuse, check it !!");
-               return;
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP\n",
-                     hwinfo, HWSET_MAX_SIZE);
-
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8188E_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
-       if (rtlefuse->autoload_failflag == true)
-               return;
-       /*VID DID SVID SDID*/
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-       /*customer ID*/
-       rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID];
        if (rtlefuse->eeprom_oemid == 0xFF)
-               rtlefuse->eeprom_oemid =        0;
+               rtlefuse->eeprom_oemid = 0;
 
        RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
                 "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-       /*EEPROM version*/
-       rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION];
-       /*mac address*/
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
-       }
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
-                "dev_addr: %pM\n", rtlefuse->dev_addr);
-       /*channel plan */
-       rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN];
        /* set channel plan from efuse */
        rtlefuse->channel_plan = rtlefuse->eeprom_channelplan;
        /*tx power*/
@@ -1978,6 +1926,8 @@ static void _rtl88ee_read_adapter_info(struct ieee80211_hw *hw)
 
                }
        }
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl88ee_hal_customized_behavior(struct ieee80211_hw *hw)
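The conversion above replaces the large on-stack hwinfo[] array with a heap allocation and funnels every early return through one exit label so the buffer is always freed. A userspace sketch of that shape (the HWSET_MAX_SIZE value here is an assumption; the real constant is per-chip):

```c
#include <stdlib.h>
#include <string.h>

#define HWSET_MAX_SIZE 512                    /* hypothetical size */

static int read_adapter_info(int parse_fails)
{
	unsigned char *hwinfo;
	int ret = -1;

	hwinfo = calloc(1, HWSET_MAX_SIZE);   /* kzalloc(..., GFP_KERNEL) */
	if (!hwinfo)
		return -1;

	if (parse_fails)
		goto exit;                    /* early bail still frees */

	memset(hwinfo, 0xff, HWSET_MAX_SIZE); /* parse the efuse map here */
	ret = 0;
exit:
	free(hwinfo);                         /* kfree(hwinfo) */
	return ret;
}
```

The single-exit cleanup is what makes the `goto exit` in the hunks safe: no path can leave the function with hwinfo still allocated.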
index 416a9ba..7498a12 100644
@@ -373,7 +373,7 @@ static bool _rtl88e_phy_bb8188e_config_parafile(struct ieee80211_hw *hw)
 
        rtstatus = phy_config_bb_with_headerfile(hw, BASEBAND_CONFIG_PHY_REG);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
 
@@ -383,7 +383,7 @@ static bool _rtl88e_phy_bb8188e_config_parafile(struct ieee80211_hw *hw)
                  phy_config_bb_with_pghdr(hw, BASEBAND_CONFIG_PHY_REG);
        }
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
        rtstatus =
@@ -1239,7 +1239,7 @@ u8 rtl88e_phy_sw_chnl(struct ieee80211_hw *hw)
        if (!(is_hal_stop(rtlhal)) && !(RT_CANNOT_IO(hw))) {
                rtl88e_phy_sw_chnl_callback(hw);
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
-                        "sw_chnl_inprogress false schdule workitem current channel %d\n",
+                        "sw_chnl_inprogress false schedule workitem current channel %d\n",
                         rtlphy->current_channel);
                rtlphy->sw_chnl_inprogress = false;
        } else {
index 40893ce..26ac4c2 100644
@@ -498,7 +498,7 @@ static bool _rtl88e_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
 
                if (rtstatus != true) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        return false;
                }
 
index 1170106..3e3b886 100644
@@ -59,7 +59,7 @@ static void _rtl88ee_query_rxphystatus(struct ieee80211_hw *hw,
        struct phy_status_rpt *phystrpt =
                (struct phy_status_rpt *)p_drvinfo;
        struct rtl_dm *rtldm = rtl_dm(rtl_priv(hw));
-       char rx_pwr_all = 0, rx_pwr[4];
+       s8 rx_pwr_all = 0, rx_pwr[4];
        u8 rf_rx_num = 0, evm, pwdb_all;
        u8 i, max_spatial_stream;
        u32 rssi, total_rssi = 0;
@@ -540,7 +540,7 @@ void rtl88ee_tx_fill_desc(struct ieee80211_hw *hw,
                                 PCI_DMA_TODEVICE);
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, sizeof(struct tx_desc_88e));
@@ -703,7 +703,7 @@ void rtl88ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index 5a24d19..9a1c208 100644
@@ -593,8 +593,8 @@ struct rx_fwinfo_88e {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
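The `char` to `s8` changes in these hunks matter because the signedness of plain `char` is implementation-defined: it is unsigned by default on ARM, so negative EVM/SNR/power values stored in a `char` can silently become large positive numbers. A userspace sketch, with stdint's `int8_t` standing in for the kernel's `s8`:

```c
#include <stdint.h>

typedef int8_t s8;              /* kernel s8: explicitly signed 8-bit */

static int is_negative_s8(s8 v)
{
	return v < 0;           /* reliable on every architecture */
}

static unsigned int widen_uchar(unsigned char v)
{
	/* what happens where plain char is unsigned: a stored -10
	 * reads back as 246 (256 - 10) after integer promotion */
	return v;
}
```

Using an explicitly signed type keeps values like the -10 entries in delta_swing_table_idx[] negative regardless of the target architecture.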
index 4422e31..6a72d0c 100644
@@ -135,7 +135,7 @@ void rtl92c_dm_init_edca_turbo(struct ieee80211_hw *hw);
 void rtl92c_dm_check_txpower_tracking(struct ieee80211_hw *hw);
 void rtl92c_dm_init_rate_adaptive_mask(struct ieee80211_hw *hw);
 void rtl92c_dm_rf_saving(struct ieee80211_hw *hw, u8 bforce_in_normal);
-void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl92c_phy_lc_calibrate(struct ieee80211_hw *hw);
 void rtl92c_phy_iq_calibrate(struct ieee80211_hw *hw, bool recovery);
 void rtl92c_dm_dynamic_txpower(struct ieee80211_hw *hw);
index 77e61b1..60ab2ec 100644
@@ -213,7 +213,7 @@ bool _rtl92c_phy_bb8192c_config_parafile(struct ieee80211_hw *hw)
        rtstatus = rtlpriv->cfg->ops->config_bb_with_headerfile(hw,
                                                 BASEBAND_CONFIG_PHY_REG);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
        if (rtlphy->rf_type == RF_1T2R) {
@@ -226,7 +226,7 @@ bool _rtl92c_phy_bb8192c_config_parafile(struct ieee80211_hw *hw)
                                                   BASEBAND_CONFIG_PHY_REG);
        }
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
        rtstatus = rtlpriv->cfg->ops->config_bb_with_headerfile(hw,
@@ -757,7 +757,7 @@ u8 rtl92c_phy_sw_chnl(struct ieee80211_hw *hw)
        if (!(is_hal_stop(rtlhal)) && !(RT_CANNOT_IO(hw))) {
                rtl92c_phy_sw_chnl_callback(hw);
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
-                        "sw_chnl_inprogress false schdule workitem\n");
+                        "sw_chnl_inprogress false schedule workitem\n");
                rtlphy->sw_chnl_inprogress = false;
        } else {
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
@@ -1353,7 +1353,7 @@ static void _rtl92c_phy_iq_calibrate(struct ieee80211_hw *hw,
 }
 
 static void _rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw,
-                                    char delta, bool is2t)
+                                    s8 delta, bool is2t)
 {
 }
 
@@ -1518,7 +1518,7 @@ void rtl92c_phy_lc_calibrate(struct ieee80211_hw *hw)
 }
 EXPORT_SYMBOL(rtl92c_phy_lc_calibrate);
 
-void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, char delta)
+void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta)
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_phy *rtlphy = &(rtlpriv->phy);
index 64bc49f..2024125 100644
@@ -210,7 +210,7 @@ u8 rtl92c_phy_sw_chnl(struct ieee80211_hw *hw);
 void rtl92c_phy_iq_calibrate(struct ieee80211_hw *hw, bool b_recovery);
 void rtl92c_phy_set_beacon_hw_reg(struct ieee80211_hw *hw,
                                         u16 beaconinterval);
-void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl92c_phy_lc_calibrate(struct ieee80211_hw *hw);
 void rtl92c_phy_set_rfpath_switch(struct ieee80211_hw *hw, bool bmain);
 bool rtl92c_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
index 58b7ac6..2446079 100644
@@ -1680,65 +1680,18 @@ static void _rtl92ce_read_adapter_info(struct ieee80211_hw *hw)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       struct device *dev = &rtl_pcipriv(hw)->dev.pdev->dev;
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
-
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
-               return;
-
-       default:
-               dev_warn(dev, "no efuse data\n");
-               return;
-       }
-
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP",
-                     hwinfo, HWSET_MAX_SIZE);
-
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8190_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-
-       if (rtlefuse->autoload_failflag)
+       int params[] = {RTL8190_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
+
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
 
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *) (&rtlefuse->dev_addr[i])) = usvalue;
-       }
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "%pM\n", rtlefuse->dev_addr);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
        _rtl92ce_read_txpower_info_from_hwpg(hw,
                                             rtlefuse->autoload_failflag,
@@ -1747,18 +1700,6 @@ static void _rtl92ce_read_adapter_info(struct ieee80211_hw *hw)
        rtl8192ce_read_bt_coexist_info_from_hwpg(hw,
                                                 rtlefuse->autoload_failflag,
                                                 hwinfo);
-
-       rtlefuse->eeprom_channelplan = *&hwinfo[EEPROM_CHANNELPLAN];
-       rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION];
-       rtlefuse->txpwr_fromeprom = true;
-       rtlefuse->eeprom_oemid = *&hwinfo[EEPROM_CUSTOMER_ID];
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-
-       /* set channel paln to world wide 13 */
-       rtlefuse->channel_plan = COUNTRY_CODE_WORLD_WIDE_13;
-
        if (rtlhal->oem_id == RT_CID_DEFAULT) {
                switch (rtlefuse->eeprom_oemid) {
                case EEPROM_CID_DEFAULT:
@@ -1782,10 +1723,10 @@ static void _rtl92ce_read_adapter_info(struct ieee80211_hw *hw)
                default:
                        rtlhal->oem_id = RT_CID_DEFAULT;
                        break;
-
                }
        }
-
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl92ce_hal_customized_behavior(struct ieee80211_hw *hw)
index e5e1353..dadc02b 100644
@@ -102,7 +102,7 @@ void rtl92c_phy_sw_chnl_callback(struct ieee80211_hw *hw);
 u8 rtl92c_phy_sw_chnl(struct ieee80211_hw *hw);
 void rtl92c_phy_iq_calibrate(struct ieee80211_hw *hw, bool b_recovery);
 void rtl92c_phy_set_beacon_hw_reg(struct ieee80211_hw *hw, u16 beaconinterval);
-void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl92c_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl92c_phy_lc_calibrate(struct ieee80211_hw *hw);
 void _rtl92ce_phy_lc_calibrate(struct ieee80211_hw *hw, bool is2t);
 void rtl92c_phy_set_rfpath_switch(struct ieee80211_hw *hw, bool bmain);
index 84ddd4d..781af1b 100644
@@ -49,7 +49,7 @@ static u8 _rtl92ce_map_hwqueue_to_fwqueue(struct sk_buff *skb, u8 hw_queue)
        return skb->priority;
 }
 
-static u8 _rtl92c_query_rxpwrpercentage(char antpower)
+static u8 _rtl92c_query_rxpwrpercentage(s8 antpower)
 {
        if ((antpower <= -100) || (antpower >= 20))
                return 0;
@@ -59,9 +59,9 @@ static u8 _rtl92c_query_rxpwrpercentage(char antpower)
                return 100 + antpower;
 }
 
-static u8 _rtl92c_evm_db_to_percentage(char value)
+static u8 _rtl92c_evm_db_to_percentage(s8 value)
 {
-       char ret_val;
+       s8 ret_val;
        ret_val = value;
 
        if (ret_val >= 0)
@@ -449,7 +449,7 @@ void rtl92ce_tx_fill_desc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        rcu_read_lock();
@@ -615,7 +615,7 @@ void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index 4bec4b0..6073045 100644
@@ -537,8 +537,8 @@ struct rx_fwinfo_92c {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index ae1129f..8789752 100644
@@ -347,56 +347,24 @@ static void _rtl92cu_read_adapter_info(struct ieee80211_hw *hw)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE] = {0};
-       u16 eeprom_id;
-
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!\n");
+       int params[] = {RTL8190_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       0};
+       u8 *hwinfo;
+
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
 
-       default:
-               pr_warn("rtl92cu: no efuse data\n\n");
-               return;
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_LOUD, "MAP",
-                     hwinfo, HWSET_MAX_SIZE);
-       eeprom_id = le16_to_cpu(*((__le16 *)&hwinfo[0]));
-       if (eeprom_id != RTL8190_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-       if (rtlefuse->autoload_failflag)
-               return;
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *) (&rtlefuse->dev_addr[i])) = usvalue;
-       }
-       pr_info("MAC address: %pM\n", rtlefuse->dev_addr);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
+
        _rtl92cu_read_txpower_info_from_hwpg(hw,
                                           rtlefuse->autoload_failflag, hwinfo);
-       rtlefuse->eeprom_vid = le16_to_cpu(*(__le16 *)&hwinfo[EEPROM_VID]);
-       rtlefuse->eeprom_did = le16_to_cpu(*(__le16 *)&hwinfo[EEPROM_DID]);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, " VID = 0x%02x PID = 0x%02x\n",
-                rtlefuse->eeprom_vid, rtlefuse->eeprom_did);
-       rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN];
-       rtlefuse->eeprom_version =
-                        le16_to_cpu(*(__le16 *)&hwinfo[EEPROM_VERSION]);
+       _rtl92cu_read_board_type(hw, hwinfo);
+
        rtlefuse->txpwr_fromeprom = true;
-       rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x\n",
-                rtlefuse->eeprom_oemid);
        if (rtlhal->oem_id == RT_CID_DEFAULT) {
                switch (rtlefuse->eeprom_oemid) {
                case EEPROM_CID_DEFAULT:
@@ -422,7 +390,8 @@ static void _rtl92cu_read_adapter_info(struct ieee80211_hw *hw)
                        break;
                }
        }
-       _rtl92cu_read_board_type(hw, hwinfo);
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl92cu_hal_customized_behavior(struct ieee80211_hw *hw)
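The per-chip `_read_adapter_info` functions above all collapse onto one table-driven helper: each caller passes its EEPROM field offsets in a params[] array instead of open-coding identical reads. A sketch of the idea (the signature is inferred from the call sites; the real rtl_get_hwinfo lives in the shared rtlwifi core and handles more fields):

```c
#include <string.h>

enum { P_EEPROM_ID, P_VID, P_DID, P_MAC, P_NPARAMS };

struct efuse_info {
	unsigned short vid, did;
	unsigned char mac[6];
};

static int get_hwinfo(struct efuse_info *ef, const unsigned char *hwinfo,
		      const int params[P_NPARAMS])
{
	unsigned short id;

	memcpy(&id, &hwinfo[0], sizeof(id));
	if (id != (unsigned short)params[P_EEPROM_ID])
		return -1;                        /* autoload failed */

	memcpy(&ef->vid, &hwinfo[params[P_VID]], sizeof(ef->vid));
	memcpy(&ef->did, &hwinfo[params[P_DID]], sizeof(ef->did));
	memcpy(ef->mac, &hwinfo[params[P_MAC]], sizeof(ef->mac));
	return 0;
}
```

Chip-specific quirks (the MAC1 address on 92D, the OEM ID fixups) stay in the callers, which run only after the common parse succeeds.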
index 0357133..68ca734 100644
@@ -596,7 +596,7 @@ void rtl92c_set_min_space(struct ieee80211_hw *hw, bool is2T)
 
 /*==============================================================*/
 
-static u8 _rtl92c_query_rxpwrpercentage(char antpower)
+static u8 _rtl92c_query_rxpwrpercentage(s8 antpower)
 {
        if ((antpower <= -100) || (antpower >= 20))
                return 0;
@@ -606,9 +606,9 @@ static u8 _rtl92c_query_rxpwrpercentage(char antpower)
                return 100 + antpower;
 }
 
-static u8 _rtl92c_evm_db_to_percentage(char value)
+static u8 _rtl92c_evm_db_to_percentage(s8 value)
 {
-       char ret_val;
+       s8 ret_val;
 
        ret_val = value;
        if (ret_val >= 0)
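The `_rtl92c_query_rxpwrpercentage` helper retyped above maps a signed antenna-power reading linearly onto 0..100, clamping implausible values to 0. A userspace sketch whose logic mirrors the hunk:

```c
#include <stdint.h>

static uint8_t query_rxpwrpercentage(int8_t antpower)
{
	if (antpower <= -100 || antpower >= 20)
		return 0;                       /* implausible reading */
	if (antpower >= 0)
		return 100;                     /* saturate */
	return (uint8_t)(100 + antpower);       /* e.g. -30 -> 70 */
}
```

This is exactly the computation that breaks if `antpower` is a plain `char` on an architecture where `char` is unsigned, which is why the parameter becomes `s8`.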
index 553a4bf..20a49ec 100644
@@ -79,8 +79,8 @@ struct rx_fwinfo_92c {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index 5624ade..ec2ea56 100644
@@ -465,7 +465,7 @@ static bool _rtl92c_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
                }
                if (!rtstatus) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        goto phy_rf_cfg_fail;
                }
        }
index 8618c32..b0f6324 100644
@@ -1744,71 +1744,26 @@ static void _rtl92de_read_adapter_info(struct ieee80211_hw *hw)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       struct device *dev = &rtl_pcipriv(hw)->dev.pdev->dev;
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
-       unsigned long flags;
+       int params[] = {RTL8190_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR_MAC0_92D,
+                       EEPROM_CHANNEL_PLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       int i;
+       u16 usvalue;
+       u8 *hwinfo;
 
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               spin_lock_irqsave(&globalmutex_for_power_and_efuse, flags);
-               rtl_efuse_shadow_map_update(hw);
-               _rtl92de_efuse_update_chip_version(hw);
-               spin_unlock_irqrestore(&globalmutex_for_power_and_efuse, flags);
-               break;
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!\n");
-               return;
-       default:
-               dev_warn(dev, "no efuse data\n");
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
-       }
-
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP",
-                     hwinfo, HWSET_MAX_SIZE);
 
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8190_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-       if (rtlefuse->autoload_failflag) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!\n");
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
                return;
-       }
-       rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID];
-       _rtl92de_read_macphymode_and_bandtype(hw, hwinfo);
 
-       /* VID, DID  SE     0xA-D */
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
+       _rtl92de_efuse_update_chip_version(hw);
+       _rtl92de_read_macphymode_and_bandtype(hw, hwinfo);
 
-       /* Read Permanent MAC address */
-       if (rtlhal->interfaceindex == 0) {
-               for (i = 0; i < 6; i += 2) {
-                       usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR_MAC0_92D + i];
-                       *((u16 *) (&rtlefuse->dev_addr[i])) = usvalue;
-               }
-       } else {
+       /* Read Permanent MAC address for 2nd interface */
+       if (rtlhal->interfaceindex != 0) {
                for (i = 0; i < 6; i += 2) {
                        usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR_MAC1_92D + i];
                        *((u16 *) (&rtlefuse->dev_addr[i])) = usvalue;
@@ -1834,10 +1789,8 @@ static void _rtl92de_read_adapter_info(struct ieee80211_hw *hw)
                rtlefuse->channel_plan = COUNTRY_CODE_FCC;
                break;
        }
-       rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION];
        rtlefuse->txpwr_fromeprom = true;
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
+       kfree(hwinfo);
 }
 
 void rtl92de_read_eeprom_info(struct ieee80211_hw *hw)
index 7810fe8..d334d2a 100644
@@ -2695,7 +2695,7 @@ void rtl92d_phy_lc_calibrate(struct ieee80211_hw *hw)
        RTPRINT(rtlpriv, FINIT, INIT_IQK,  "LCK:Finish!!!\n");
 }
 
-void rtl92d_phy_ap_calibrate(struct ieee80211_hw *hw, char delta)
+void rtl92d_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta)
 {
        return;
 }
index 48d5c68..8115bf4 100644
@@ -160,7 +160,7 @@ void rtl92d_phy_config_maccoexist_rfpage(struct ieee80211_hw *hw);
 bool rtl92d_phy_check_poweroff(struct ieee80211_hw *hw);
 void rtl92d_phy_lc_calibrate(struct ieee80211_hw *hw);
 void rtl92d_update_bbrf_configuration(struct ieee80211_hw *hw);
-void rtl92d_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl92d_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl92d_phy_iq_calibrate(struct ieee80211_hw *hw);
 void rtl92d_phy_reset_iqk_result(struct ieee80211_hw *hw);
 void rtl92d_release_cckandrw_pagea_ctl(struct ieee80211_hw *hw,
index 6a6ac54..2f479d3 100644
@@ -601,7 +601,7 @@ bool rtl92d_phy_rf6052_config(struct ieee80211_hw *hw)
                }
                if (!rtstatus) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        goto phy_rf_cfg_fail;
                }
 
index 1feaa62..e998e98 100644
@@ -48,7 +48,7 @@ static u8 _rtl92de_map_hwqueue_to_fwqueue(struct sk_buff *skb, u8 hw_queue)
        return skb->priority;
 }
 
-static u8 _rtl92d_query_rxpwrpercentage(char antpower)
+static u8 _rtl92d_query_rxpwrpercentage(s8 antpower)
 {
        if ((antpower <= -100) || (antpower >= 20))
                return 0;
@@ -58,9 +58,9 @@ static u8 _rtl92d_query_rxpwrpercentage(char antpower)
                return 100 + antpower;
 }
 
-static u8 _rtl92d_evm_db_to_percentage(char value)
+static u8 _rtl92d_evm_db_to_percentage(s8 value)
 {
-       char ret_val = value;
+       s8 ret_val = value;
 
        if (ret_val >= 0)
                ret_val = 0;
@@ -586,7 +586,7 @@ void rtl92de_tx_fill_desc(struct ieee80211_hw *hw,
                                 PCI_DMA_TODEVICE);
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, sizeof(struct tx_desc_92d));
@@ -744,7 +744,7 @@ void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index fb5cf06..194d99f 100644 (file)
@@ -554,8 +554,8 @@ struct rx_fwinfo_92d {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index 459f3d0..e6b5786 100644 (file)
@@ -496,7 +496,7 @@ static void rtl92ee_dm_find_minimum_rssi(struct ieee80211_hw *hw)
                rtl_dm_dig->min_undec_pwdb_for_dm =
                        rtlpriv->dm.entry_min_undec_sm_pwdb;
                RT_TRACE(rtlpriv, COMP_BB_POWERSAVING, DBG_LOUD,
-                        "AP Ext Port or disconnet PWDB = 0x%x\n",
+                        "AP Ext Port or disconnect PWDB = 0x%x\n",
                         rtl_dm_dig->min_undec_pwdb_for_dm);
        }
        RT_TRACE(rtlpriv, COMP_DIG, DBG_LOUD,
@@ -983,7 +983,7 @@ static bool _rtl92ee_dm_ra_state_check(struct ieee80211_hw *hw,
                break;
        default:
                RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG,
-                        "wrong rssi level setting %d !", *ratr_state);
+                        "wrong rssi level setting %d !\n", *ratr_state);
                break;
        }
 
@@ -1219,6 +1219,7 @@ void rtl92ee_dm_watchdog(struct ieee80211_hw *hw)
        if (ppsc->p2p_ps_info.p2p_ps_mode)
                fw_ps_awake = false;
 
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
        if ((ppsc->rfpwr_state == ERFON) &&
            ((!fw_current_inpsmode) && fw_ps_awake) &&
            (!ppsc->rfchange_inprogress)) {
@@ -1233,4 +1234,5 @@ void rtl92ee_dm_watchdog(struct ieee80211_hw *hw)
                rtl92ee_dm_dynamic_atc_switch(hw);
                rtl92ee_dm_dynamic_primary_cca_ckeck(hw);
        }
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
 }
index 28c260d..b07af8d 100644 (file)
@@ -2098,75 +2098,24 @@ static void _rtl92ee_read_adapter_info(struct ieee80211_hw *hw)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
-
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
-               return;
-
-       default:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "boot from neither eeprom nor efuse, check it !!");
+       int params[] = {RTL8192E_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
+
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
 
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP\n",
-                     hwinfo, HWSET_MAX_SIZE);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8192E_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-
-       if (rtlefuse->autoload_failflag)
-               return;
-       /*VID DID SVID SDID*/
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-       /*customer ID*/
-       rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMER_ID];
        if (rtlefuse->eeprom_oemid == 0xFF)
                rtlefuse->eeprom_oemid = 0;
 
        RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
                 "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-       /*EEPROM version*/
-       rtlefuse->eeprom_version = *(u8 *)&hwinfo[EEPROM_VERSION];
-       /*mac address*/
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
-       }
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
-                "dev_addr: %pM\n", rtlefuse->dev_addr);
-       /*channel plan */
-       rtlefuse->eeprom_channelplan = *(u8 *)&hwinfo[EEPROM_CHANNELPLAN];
        /* set channel plan from efuse */
        rtlefuse->channel_plan = rtlefuse->eeprom_channelplan;
        /*tx power*/
@@ -2208,6 +2157,8 @@ static void _rtl92ee_read_adapter_info(struct ieee80211_hw *hw)
                        break;
                }
        }
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl92ee_hal_customized_behavior(struct ieee80211_hw *hw)
index c2bf8d1..beafc9a 100644 (file)
@@ -547,7 +547,7 @@ static void _rtl92ee_phy_store_txpower_by_rate_base(struct ieee80211_hw *hw)
 static void _phy_convert_txpower_dbm_to_relative_value(u32 *data, u8 start,
                                                       u8 end, u8 base)
 {
-       char i = 0;
+       s8 i = 0;
        u8 tmp = 0;
        u32 temp_data = 0;
 
@@ -650,7 +650,7 @@ static bool _rtl92ee_phy_bb8192ee_config_parafile(struct ieee80211_hw *hw)
 
        rtstatus = phy_config_bb_with_hdr_file(hw, BASEBAND_CONFIG_PHY_REG);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
 
@@ -662,7 +662,7 @@ static bool _rtl92ee_phy_bb8192ee_config_parafile(struct ieee80211_hw *hw)
        }
        _rtl92ee_phy_txpower_by_rate_configuration(hw);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
        rtstatus = phy_config_bb_with_hdr_file(hw, BASEBAND_CONFIG_AGC_TAB);
@@ -1189,7 +1189,7 @@ static u8 _rtl92ee_get_txpower_by_rate(struct ieee80211_hw *hw,
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_phy *rtlphy = &rtlpriv->phy;
        u8 shift = 0, sec, tx_num;
-       char diff = 0;
+       s8 diff = 0;
 
        sec = _rtl92ee_phy_get_ratesection_intxpower_byrate(rf, rate);
        tx_num = RF_TX_NUM_NONIMPLEMENT;
@@ -1265,14 +1265,14 @@ static u8 _rtl92ee_get_txpower_index(struct ieee80211_hw *hw,
                         "Illegal channel!!\n");
        }
 
-       if (IS_CCK_RATE(rate))
+       if (IS_CCK_RATE((s8)rate))
                tx_power = rtlefuse->txpwrlevel_cck[rfpath][index];
        else if (DESC92C_RATE6M <= rate)
                tx_power = rtlefuse->txpwrlevel_ht40_1s[rfpath][index];
 
        /* OFDM-1T*/
        if (DESC92C_RATE6M <= rate && rate <= DESC92C_RATE54M &&
-           !IS_CCK_RATE(rate))
+           !IS_CCK_RATE((s8)rate))
                tx_power += rtlefuse->txpwr_legacyhtdiff[rfpath][TX_1S];
 
        /* BW20-1S, BW20-2S */
@@ -1819,7 +1819,7 @@ u8 rtl92ee_phy_sw_chnl(struct ieee80211_hw *hw)
        if (!(is_hal_stop(rtlhal)) && !(RT_CANNOT_IO(hw))) {
                rtl92ee_phy_sw_chnl_callback(hw);
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
-                        "sw_chnl_inprogress false schdule workitem current channel %d\n",
+                        "sw_chnl_inprogress false schedule workitem current channel %d\n",
                         rtlphy->current_channel);
                rtlphy->sw_chnl_inprogress = false;
        } else {
@@ -2969,7 +2969,7 @@ void rtl92ee_phy_lc_calibrate(struct ieee80211_hw *hw)
        rtlphy->lck_inprogress = false;
 }
 
-void rtl92ee_phy_ap_calibrate(struct ieee80211_hw *hw, char delta)
+void rtl92ee_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta)
 {
 }
 
index c6e97c8..49bd0e5 100644 (file)
@@ -141,7 +141,7 @@ void rtl92ee_phy_set_bw_mode(struct ieee80211_hw *hw,
 void rtl92ee_phy_sw_chnl_callback(struct ieee80211_hw *hw);
 u8 rtl92ee_phy_sw_chnl(struct ieee80211_hw *hw);
 void rtl92ee_phy_iq_calibrate(struct ieee80211_hw *hw, bool b_recovery);
-void rtl92ee_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl92ee_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl92ee_phy_lc_calibrate(struct ieee80211_hw *hw);
 void rtl92ee_phy_set_rfpath_switch(struct ieee80211_hw *hw, bool bmain);
 bool rtl92ee_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
index c9bc33c..73716c0 100644 (file)
@@ -142,7 +142,7 @@ static bool _rtl92ee_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
 
                if (!rtstatus) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        return false;
                }
        }
index 35e6bf7..2d48ccd 100644 (file)
@@ -56,7 +56,7 @@ static void _rtl92ee_query_rxphystatus(struct ieee80211_hw *hw,
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct phy_status_rpt *p_phystrpt = (struct phy_status_rpt *)p_drvinfo;
-       char rx_pwr_all = 0, rx_pwr[4];
+       s8 rx_pwr_all = 0, rx_pwr[4];
        u8 rf_rx_num = 0, evm, pwdb_all;
        u8 i, max_spatial_stream;
        u32 rssi, total_rssi = 0;
@@ -703,7 +703,7 @@ void rtl92ee_tx_fill_desc(struct ieee80211_hw *hw,
                                 PCI_DMA_TODEVICE);
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
 
@@ -867,7 +867,7 @@ void rtl92ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, txdesc_len);
index a4c3834..8053d1b 100644 (file)
@@ -650,8 +650,8 @@ struct rx_fwinfo {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index 442f2b6..ddfa0ae 100644 (file)
@@ -2003,7 +2003,7 @@ static void _rtl92se_read_adapter_info(struct ieee80211_hw *hw)
        rtlefuse->b1ss_support = rtlefuse->b1x1_recvcombine;
        rtlefuse->eeprom_oemid = *&hwinfo[EEPROM_CUSTOMID];
 
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x",
+       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "EEPROM Customer ID: 0x%2x\n",
                 rtlefuse->eeprom_oemid);
 
        /* set channel paln to world wide 13 */
index 9475aa2..34e88a3 100644 (file)
@@ -137,7 +137,7 @@ static void _rtl92s_set_antennadiff(struct ieee80211_hw *hw,
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_phy *rtlphy = &(rtlpriv->phy);
-       char ant_pwr_diff = 0;
+       s8 ant_pwr_diff = 0;
        u32     u4reg_val = 0;
 
        if (rtlphy->rf_type == RF_2T2R) {
index 125b29b..d53bbf6 100644 (file)
@@ -360,7 +360,7 @@ void rtl92se_tx_fill_desc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        if (mac->opmode == NL80211_IFTYPE_STATION) {
@@ -529,7 +529,7 @@ void rtl92se_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        /* Clear all status     */
index 4c1c96c..42a6fba 100644 (file)
@@ -816,6 +816,7 @@ void rtl8723e_dm_watchdog(struct ieee80211_hw *hw)
        if (ppsc->p2p_ps_info.p2p_ps_mode)
                fw_ps_awake = false;
 
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
        if ((ppsc->rfpwr_state == ERFON) &&
            ((!fw_current_inpsmode) && fw_ps_awake) &&
            (!ppsc->rfchange_inprogress)) {
@@ -829,6 +830,7 @@ void rtl8723e_dm_watchdog(struct ieee80211_hw *hw)
                rtl8723e_dm_bt_coexist(hw);
                rtl8723e_dm_check_edca_turbo(hw);
        }
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
        if (rtlpriv->btcoexist.init_set)
                rtl_write_byte(rtlpriv, 0x76e, 0xc);
 }
@@ -874,8 +876,8 @@ void rtl8723e_dm_bt_coexist(struct ieee80211_hw *hw)
 
        tmp_byte = rtl_read_byte(rtlpriv, 0x40);
        RT_TRACE(rtlpriv, COMP_BT_COEXIST, DBG_LOUD,
-                "[DM][BT], 0x40 is 0x%x", tmp_byte);
+                "[DM][BT], 0x40 is 0x%x\n", tmp_byte);
        RT_TRACE(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
-                "[DM][BT], bt_dm_coexist start");
+                "[DM][BT], bt_dm_coexist start\n");
        rtl8723e_dm_bt_coexist_8723(hw);
 }
index 44de695..ec9bcf3 100644 (file)
@@ -185,7 +185,7 @@ static void rtl8723e_dm_bt_set_hw_pta_mode(struct ieee80211_hw *hw, bool b_mode)
        struct rtl_priv *rtlpriv = rtl_priv(hw);
 
        if (BT_PTA_MODE_ON == b_mode) {
-               RT_TRACE(rtlpriv, COMP_BT_COEXIST, DBG_TRACE, "PTA mode on");
+               RT_TRACE(rtlpriv, COMP_BT_COEXIST, DBG_TRACE, "PTA mode on\n");
                /*  Enable GPIO 0/1/2/3/8 pins for bt */
                rtl_write_byte(rtlpriv, 0x40, 0x20);
                rtlpriv->btcoexist.hw_coexist_all_off = false;
@@ -1401,7 +1401,7 @@ static void rtl8723e_dm_bt_inq_page_monitor(struct ieee80211_hw *hw)
                        (long)hal_coex_8723.bt_inq_page_start_time) / HZ)
                        >= 10) {
                        RT_TRACE(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
-                               "[BTCoex], BT Inquiry/page >= 10sec!!!");
+                               "[BTCoex], BT Inquiry/page >= 10sec!!!\n");
                        hal_coex_8723.bt_inq_page_start_time = 0;
                        rtlpriv->btcoexist.cstate &=
                                ~BT_COEX_STATE_BT_INQ_PAGE;
index 57a1ba8..b88c7ee 100644 (file)
@@ -1630,67 +1630,22 @@ static void _rtl8723e_read_adapter_info(struct ieee80211_hw *hw,
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       struct device *dev = &rtl_pcipriv(hw)->dev.pdev->dev;
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
+       int params[] = {RTL8190_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
 
        if (b_pseudo_test) {
                /* need add */
                return;
        }
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
-               return;
-
-       default:
-               dev_warn(dev, "no efuse data\n");
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP\n",
-                     hwinfo, HWSET_MAX_SIZE);
-
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8190_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-
-       if (rtlefuse->autoload_failflag)
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
 
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
-       }
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
-                "dev_addr: %pM\n", rtlefuse->dev_addr);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
        _rtl8723e_read_txpower_info_from_hwpg(hw, rtlefuse->autoload_failflag,
                                              hwinfo);
@@ -1698,144 +1653,138 @@ static void _rtl8723e_read_adapter_info(struct ieee80211_hw *hw,
        rtl8723e_read_bt_coexist_info_from_hwpg(hw,
                        rtlefuse->autoload_failflag, hwinfo);
 
-       rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN];
-       rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION];
-       rtlefuse->txpwr_fromeprom = true;
-       rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID];
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-
-       /* set channel paln to world wide 13 */
-       rtlefuse->channel_plan = COUNTRY_CODE_WORLD_WIDE_13;
-
-       if (rtlhal->oem_id == RT_CID_DEFAULT) {
-               switch (rtlefuse->eeprom_oemid) {
-               case EEPROM_CID_DEFAULT:
-                       if (rtlefuse->eeprom_did == 0x8176) {
-                               if (CHK_SVID_SMID(0x10EC, 0x6151) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6152) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6154) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6155) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6177) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6178) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6179) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6180) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7151) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7152) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7154) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7155) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7177) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7178) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7179) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7180) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8151) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8152) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8154) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8155) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8185) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9151) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9152) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9154) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9155) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9185))
+       if (rtlhal->oem_id != RT_CID_DEFAULT)
+               return;
+
+       switch (rtlefuse->eeprom_oemid) {
+       case EEPROM_CID_DEFAULT:
+               switch (rtlefuse->eeprom_did) {
+               case 0x8176:
+                       switch (rtlefuse->eeprom_svid) {
+                       case 0x10EC:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x6151 ... 0x6152:
+                               case 0x6154 ... 0x6155:
+                               case 0x6177 ... 0x6180:
+                               case 0x7151 ... 0x7152:
+                               case 0x7154 ... 0x7155:
+                               case 0x7177 ... 0x7180:
+                               case 0x8151 ... 0x8152:
+                               case 0x8154 ... 0x8155:
+                               case 0x8181 ... 0x8182:
+                               case 0x8184 ... 0x8185:
+                               case 0x9151 ... 0x9152:
+                               case 0x9154 ... 0x9155:
+                               case 0x9181 ... 0x9182:
+                               case 0x9184 ... 0x9185:
                                        rtlhal->oem_id = RT_CID_TOSHIBA;
-                               else if (rtlefuse->eeprom_svid == 0x1025)
-                                       rtlhal->oem_id = RT_CID_819X_ACER;
-                               else if (CHK_SVID_SMID(0x10EC, 0x6191) ||
-                                        CHK_SVID_SMID(0x10EC, 0x6192) ||
-                                        CHK_SVID_SMID(0x10EC, 0x6193) ||
-                                        CHK_SVID_SMID(0x10EC, 0x7191) ||
-                                        CHK_SVID_SMID(0x10EC, 0x7192) ||
-                                        CHK_SVID_SMID(0x10EC, 0x7193) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8191) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8192) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8193) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9191) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9192) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9193))
+                                       break;
+                               case 0x6191 ... 0x6193:
+                               case 0x7191 ... 0x7193:
+                               case 0x8191 ... 0x8193:
+                               case 0x9191 ... 0x9193:
                                        rtlhal->oem_id = RT_CID_819X_SAMSUNG;
-                               else if (CHK_SVID_SMID(0x10EC, 0x8195) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9195) ||
-                                        CHK_SVID_SMID(0x10EC, 0x7194) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8200) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8201) ||
-                                        CHK_SVID_SMID(0x10EC, 0x8202) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9200))
-                                       rtlhal->oem_id = RT_CID_819X_LENOVO;
-                               else if (CHK_SVID_SMID(0x10EC, 0x8197) ||
-                                        CHK_SVID_SMID(0x10EC, 0x9196))
+                                       break;
+                               case 0x8197:
+                               case 0x9196:
                                        rtlhal->oem_id = RT_CID_819X_CLEVO;
-                               else if (CHK_SVID_SMID(0x1028, 0x8194) ||
-                                        CHK_SVID_SMID(0x1028, 0x8198) ||
-                                        CHK_SVID_SMID(0x1028, 0x9197) ||
-                                        CHK_SVID_SMID(0x1028, 0x9198))
+                                       break;
+                               case 0x8203:
+                                       rtlhal->oem_id = RT_CID_819X_PRONETS;
+                                       break;
+                               case 0x8195:
+                               case 0x9195:
+                               case 0x7194:
+                               case 0x8200 ... 0x8202:
+                               case 0x9200:
+                                       rtlhal->oem_id = RT_CID_819X_LENOVO;
+                                       break;
+                               }
+                       case 0x1025:
+                               rtlhal->oem_id = RT_CID_819X_ACER;
+                               break;
+                       case 0x1028:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x8194:
+                               case 0x8198:
+                               case 0x9197 ... 0x9198:
                                        rtlhal->oem_id = RT_CID_819X_DELL;
-                               else if (CHK_SVID_SMID(0x103C, 0x1629))
+                                       break;
+                               }
+                               break;
+                       case 0x103C:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x1629:
                                        rtlhal->oem_id = RT_CID_819X_HP;
-                               else if (CHK_SVID_SMID(0x1A32, 0x2315))
+                               }
+                               break;
+                       case 0x1A32:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x2315:
                                        rtlhal->oem_id = RT_CID_819X_QMI;
-                               else if (CHK_SVID_SMID(0x10EC, 0x8203))
-                                       rtlhal->oem_id = RT_CID_819X_PRONETS;
-                               else if (CHK_SVID_SMID(0x1043, 0x84B5))
-                                       rtlhal->oem_id =
-                                                RT_CID_819X_EDIMAX_ASUS;
-                               else
-                                       rtlhal->oem_id = RT_CID_DEFAULT;
-                       } else if (rtlefuse->eeprom_did == 0x8178) {
-                               if (CHK_SVID_SMID(0x10EC, 0x6181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x6185) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x7185) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x8185) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9181) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9182) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9184) ||
-                                   CHK_SVID_SMID(0x10EC, 0x9185))
-                                       rtlhal->oem_id = RT_CID_TOSHIBA;
-                               else if (rtlefuse->eeprom_svid == 0x1025)
-                                       rtlhal->oem_id = RT_CID_819X_ACER;
-                               else if (CHK_SVID_SMID(0x10EC, 0x8186))
-                                       rtlhal->oem_id = RT_CID_819X_PRONETS;
-                               else if (CHK_SVID_SMID(0x1043, 0x8486))
+                                       break;
+                               }
+                               break;
+                       case 0x1043:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x84B5:
                                        rtlhal->oem_id =
-                                                    RT_CID_819X_EDIMAX_ASUS;
-                               else
-                                       rtlhal->oem_id = RT_CID_DEFAULT;
-                       } else {
-                               rtlhal->oem_id = RT_CID_DEFAULT;
+                                               RT_CID_819X_EDIMAX_ASUS;
+                               }
+                               break;
                        }
                        break;
-               case EEPROM_CID_TOSHIBA:
-                       rtlhal->oem_id = RT_CID_TOSHIBA;
-                       break;
-               case EEPROM_CID_CCX:
-                       rtlhal->oem_id = RT_CID_CCX;
-                       break;
-               case EEPROM_CID_QMI:
-                       rtlhal->oem_id = RT_CID_819X_QMI;
-                       break;
-               case EEPROM_CID_WHQL:
+               case 0x8178:
+                       switch (rtlefuse->eeprom_svid) {
+                       case 0x10ec:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x6181 ... 0x6182:
+                               case 0x6184 ... 0x6185:
+                               case 0x7181 ... 0x7182:
+                               case 0x7184 ... 0x7185:
+                               case 0x8181 ... 0x8182:
+                               case 0x8184 ... 0x8185:
+                               case 0x9181 ... 0x9182:
+                               case 0x9184 ... 0x9185:
+                                       rtlhal->oem_id = RT_CID_TOSHIBA;
+                                       break;
+                               case 0x8186:
+                                       rtlhal->oem_id =
+                                               RT_CID_819X_PRONETS;
+                                       break;
+                               }
                                break;
-               default:
-                       rtlhal->oem_id = RT_CID_DEFAULT;
+                       case 0x1025:
+                               rtlhal->oem_id = RT_CID_819X_ACER;
+                               break;
+                       case 0x1043:
+                               switch (rtlefuse->eeprom_smid) {
+                               case 0x8486:
+                                       rtlhal->oem_id =
+                                            RT_CID_819X_EDIMAX_ASUS;
+                               }
+                               break;
+                       }
                        break;
-
                }
+               break;
+       case EEPROM_CID_TOSHIBA:
+               rtlhal->oem_id = RT_CID_TOSHIBA;
+               break;
+       case EEPROM_CID_CCX:
+               rtlhal->oem_id = RT_CID_CCX;
+               break;
+       case EEPROM_CID_QMI:
+               rtlhal->oem_id = RT_CID_819X_QMI;
+               break;
+       case EEPROM_CID_WHQL:
+               break;
+       default:
+               rtlhal->oem_id = RT_CID_DEFAULT;
+               break;
        }
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl8723e_hal_customized_behavior(struct ieee80211_hw *hw)
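The `exit:`/`kfree(hwinfo)` tail above reflects the series' recurring move: the large `hwinfo` array leaves the kernel stack and becomes a heap allocation, with every early error path funneled through one cleanup label. A minimal userspace sketch of the same shape (`calloc`/`free` standing in for `kzalloc`/`kfree`; `parse_hwinfo` is a hypothetical placeholder for the real efuse parsing):

```c
#include <stdlib.h>
#include <string.h>

#define HWSET_MAX_SIZE 512  /* illustrative size; the real value is driver-defined */

/* hypothetical stand-in for the real efuse parse step */
static int parse_hwinfo(const unsigned char *buf)
{
	return buf[0] == 0x29 ? 0 : -1;  /* pretend byte 0 carries an EEPROM ID */
}

/* mirrors the patch's shape: heap buffer instead of a large on-stack array,
 * with every early-exit path funneled through one cleanup label */
static int read_adapter_info(const unsigned char *efuse, size_t len)
{
	unsigned char *hwinfo;
	int ret = -1;

	hwinfo = calloc(1, HWSET_MAX_SIZE);      /* kzalloc() in the kernel */
	if (!hwinfo)
		return -1;
	memcpy(hwinfo, efuse, len < HWSET_MAX_SIZE ? len : HWSET_MAX_SIZE);

	if (parse_hwinfo(hwinfo))
		goto exit;                       /* error: still frees the buffer */
	ret = 0;                                 /* ...further parsing would go here... */
exit:
	free(hwinfo);                            /* single free; no leak on any path */
	return ret;
}
```

The single-label cleanup is what makes adding new early returns safe: a new `goto exit` can never leak the buffer.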
index d367097..601b78e 100644
@@ -213,7 +213,7 @@ static bool _rtl8723e_phy_bb8192c_config_parafile(struct ieee80211_hw *hw)
        rtstatus = _rtl8723e_phy_config_bb_with_headerfile(hw,
                                                BASEBAND_CONFIG_PHY_REG);
        if (rtstatus != true) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
 
@@ -227,7 +227,7 @@ static bool _rtl8723e_phy_bb8192c_config_parafile(struct ieee80211_hw *hw)
                                        BASEBAND_CONFIG_PHY_REG);
        }
        if (rtstatus != true) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
        rtstatus =
@@ -893,7 +893,7 @@ u8 rtl8723e_phy_sw_chnl(struct ieee80211_hw *hw)
        if (!(is_hal_stop(rtlhal)) && !(RT_CANNOT_IO(hw))) {
                rtl8723e_phy_sw_chnl_callback(hw);
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
-                        "sw_chnl_inprogress false schdule workitem\n");
+                        "sw_chnl_inprogress false schedule workitem\n");
                rtlphy->sw_chnl_inprogress = false;
        } else {
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
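Many hunks in this series only append `"\n"` to `RT_TRACE` format strings. printk-style loggers do not terminate lines themselves, so a message without a trailing newline leaves the console line open and the next print is glued onto it. A toy userspace logger (hypothetical, just to show the symptom):

```c
#include <string.h>

/* toy console ring: messages are appended verbatim, as a printk-style
 * backend does -- nothing inserts line breaks for you */
static char ring[128];

static void log_msg(const char *msg)
{
	strncat(ring, msg, sizeof(ring) - strlen(ring) - 1);
}
```

Logging `"Write BB Reg Fail!!"` and then `"next message"` yields the single run `Write BB Reg Fail!!next message`; with the patch's trailing `"\n"` each trace ends its own line.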
index 9ebc828..4227717 100644
@@ -504,7 +504,7 @@ static bool _rtl8723e_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
 
                if (rtstatus != true) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        return false;
                }
        }
index 7b4a9b6..e93125e 100644
@@ -389,7 +389,7 @@ void rtl8723e_tx_fill_desc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        if (mac->opmode == NL80211_IFTYPE_STATION) {
@@ -557,7 +557,7 @@ void rtl8723e_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index 32970bf..43d4c79 100644
@@ -522,8 +522,8 @@ struct rx_fwinfo_8723e {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index 3a81cdb..131c0d1 100644
@@ -758,11 +758,11 @@ static void rtl8723be_dm_txpower_tracking_callback_thermalmeter(
        u8 ofdm_min_index = 6;
        u8 index_for_channel = 0;
 
-       char delta_swing_table_idx_tup_a[TXSCALE_TABLE_SIZE] = {
+       s8 delta_swing_table_idx_tup_a[TXSCALE_TABLE_SIZE] = {
                0, 0, 1, 2, 2, 2, 3, 3, 3, 4,  5,
                5, 6, 6, 7, 7, 8, 8, 9, 9, 9, 10,
                10, 11, 11, 12, 12, 13, 14, 15};
-       char delta_swing_table_idx_tdown_a[TXSCALE_TABLE_SIZE] = {
+       s8 delta_swing_table_idx_tdown_a[TXSCALE_TABLE_SIZE] = {
                0, 0, 1, 2, 2, 2, 3, 3, 3, 4,  5,
                5, 6, 6, 6, 6, 7, 7, 7, 8, 8,  9,
                9, 10, 10, 11, 12, 13, 14, 15};
@@ -1279,6 +1279,7 @@ void rtl8723be_dm_watchdog(struct ieee80211_hw *hw)
        if (ppsc->p2p_ps_info.p2p_ps_mode)
                fw_ps_awake = false;
 
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
        if ((ppsc->rfpwr_state == ERFON) &&
                ((!fw_current_inpsmode) && fw_ps_awake) &&
                (!ppsc->rfchange_inprogress)) {
@@ -1294,5 +1295,6 @@ void rtl8723be_dm_watchdog(struct ieee80211_hw *hw)
                rtl8723be_dm_check_txpower_tracking(hw);
                rtl8723be_dm_dynamic_txpower(hw);
        }
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
        rtlpriv->dm.dbginfo.num_qry_beacon_pkt = 0;
 }
index 08288ac..82e4476 100644
@@ -1474,7 +1474,7 @@ static enum version_8723e _rtl8723be_read_chip_version(struct ieee80211_hw *hw)
 
        value32 = rtl_read_dword(rtlpriv, REG_SYS_CFG1);
        if ((value32 & (CHIP_8723B)) != CHIP_8723B)
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "unkown chip version\n");
+               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "unknown chip version\n");
        else
                version = (enum version_8723e)CHIP_8723B;
 
@@ -2026,10 +2026,12 @@ static void _rtl8723be_read_adapter_info(struct ieee80211_hw *hw,
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
-       struct device *dev = &rtl_pcipriv(hw)->dev.pdev->dev;
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
+       int params[] = {RTL8723BE_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
+       int i;
        bool is_toshiba_smid1 = false;
        bool is_toshiba_smid2 = false;
        bool is_samsung_smid = false;
@@ -2057,58 +2059,12 @@ static void _rtl8723be_read_adapter_info(struct ieee80211_hw *hw,
                return;
        }
 
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
 
-       default:
-               dev_warn(dev, "no efuse data\n");
-               return;
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, ("MAP\n"),
-                     hwinfo, HWSET_MAX_SIZE);
-
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL8723BE_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-
-       if (rtlefuse->autoload_failflag)
-               return;
-
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
-       }
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG, "dev_addr: %pM\n",
-                rtlefuse->dev_addr);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
        /*parse xtal*/
        rtlefuse->crystalcap = hwinfo[EEPROM_XTAL_8723BE];
@@ -2122,14 +2078,6 @@ static void _rtl8723be_read_adapter_info(struct ieee80211_hw *hw,
                                                 rtlefuse->autoload_failflag,
                                                 hwinfo);
 
-       rtlefuse->eeprom_channelplan = hwinfo[EEPROM_CHANNELPLAN];
-       rtlefuse->eeprom_version = *(u16 *)&hwinfo[EEPROM_VERSION];
-       rtlefuse->txpwr_fromeprom = true;
-       rtlefuse->eeprom_oemid = hwinfo[EEPROM_CUSTOMER_ID];
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-
        /* set channel plan from efuse */
        rtlefuse->channel_plan = rtlefuse->eeprom_channelplan;
 
@@ -2240,6 +2188,8 @@ static void _rtl8723be_read_adapter_info(struct ieee80211_hw *hw,
                        break;
                }
        }
+exit:
+       kfree(hwinfo);
 }
 
 static void _rtl8723be_hal_customized_behavior(struct ieee80211_hw *hw)
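The `params[]` table plus `rtl_get_hwinfo()` call above replaces the block of near-identical "read a u16 at an EEPROM offset, trace it, store it" statements that the hunk deletes. The consolidation can be sketched as a table-driven reader over the efuse shadow map (the offsets and field order here are illustrative, not the driver's real layout):

```c
#include <stdint.h>
#include <stddef.h>

/* illustrative field indices; the real params[] carries EEPROM_VID etc. */
enum { P_ID, P_VID, P_DID, P_COUNT };

/* pull little-endian u16 fields from the efuse shadow map at the offsets
 * listed in offs[], one loop instead of repeated hand-written reads */
static int get_hwinfo(const uint8_t *map, size_t len,
		      const size_t offs[P_COUNT], uint16_t out[P_COUNT])
{
	for (int i = 0; i < P_COUNT; i++) {
		if (offs[i] + 1 >= len)
			return -1;               /* bounds check per field */
		out[i] = (uint16_t)(map[offs[i]] | (map[offs[i] + 1] << 8));
	}
	return 0;
}
```

Adding a new field then means appending one table entry rather than duplicating another read-and-trace stanza in every chip's `_read_adapter_info`.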
index c5ca9df..285818d 100644
@@ -379,7 +379,7 @@ static void _rtl8723be_phy_store_txpower_by_rate_base(struct ieee80211_hw *hw)
 static void _phy_convert_txpower_dbm_to_relative_value(u32 *data, u8 start,
                                                u8 end, u8 base_val)
 {
-       char i = 0;
+       s8 i = 0;
        u8 temp_value = 0;
        u32 temp_data = 0;
 
@@ -467,7 +467,7 @@ static bool _rtl8723be_phy_bb8723b_config_parafile(struct ieee80211_hw *hw)
        rtstatus = _rtl8723be_phy_config_bb_with_headerfile(hw,
                                                BASEBAND_CONFIG_PHY_REG);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
        _rtl8723be_phy_init_tx_power_by_rate(hw);
@@ -478,7 +478,7 @@ static bool _rtl8723be_phy_bb8723b_config_parafile(struct ieee80211_hw *hw)
        }
        phy_txpower_by_rate_config(hw);
        if (!rtstatus) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
        rtstatus = _rtl8723be_phy_config_bb_with_headerfile(hw,
@@ -953,7 +953,7 @@ static u8 _rtl8723be_get_txpower_by_rate(struct ieee80211_hw *hw,
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_phy *rtlphy = &rtlpriv->phy;
        u8 shift = 0, rate_section, tx_num;
-       char tx_pwr_diff = 0;
+       s8 tx_pwr_diff = 0;
 
        rate_section = _rtl8723be_phy_get_ratesection_intxpower_byrate(rfpath,
                                                                       rate);
@@ -1395,7 +1395,7 @@ u8 rtl8723be_phy_sw_chnl(struct ieee80211_hw *hw)
        if (!(is_hal_stop(rtlhal)) && !(RT_CANNOT_IO(hw))) {
                rtl8723be_phy_sw_chnl_callback(hw);
                RT_TRACE(rtlpriv, COMP_CHAN, DBG_LOUD,
-                        "sw_chnl_inprogress false schdule workitem current channel %d\n",
+                        "sw_chnl_inprogress false schedule workitem current channel %d\n",
                         rtlphy->current_channel);
                rtlphy->sw_chnl_inprogress = false;
        } else {
index 97f5a03..78f4f18 100644
@@ -502,7 +502,7 @@ static bool _rtl8723be_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
 
                if (!rtstatus) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        return false;
                }
        }
index 6034597..2175aec 100644
@@ -56,7 +56,7 @@ static void _rtl8723be_query_rxphystatus(struct ieee80211_hw *hw,
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct phy_status_rpt *p_phystrpt = (struct phy_status_rpt *)p_drvinfo;
-       char rx_pwr_all = 0, rx_pwr[4];
+       s8 rx_pwr_all = 0, rx_pwr[4];
        u8 rf_rx_num = 0, evm, pwdb_all, pwdb_all_bt = 0;
        u8 i, max_spatial_stream;
        u32 rssi, total_rssi = 0;
@@ -464,7 +464,7 @@ void rtl8723be_tx_fill_desc(struct ieee80211_hw *hw,
        mapping = pci_map_single(rtlpci->pdev, skb->data, skb->len,
                                 PCI_DMA_TODEVICE);
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
-               RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE, "DMA mapping error");
+               RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE, "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, sizeof(struct tx_desc_8723be));
@@ -616,7 +616,7 @@ void rtl8723be_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index 40c3660..8a9fe41 100644
@@ -385,9 +385,9 @@ struct phy_status_rpt {
        u8 cck_rpt_b_ofdm_cfosho_b;
        u8 rsvd_1;/* ch_corr_msb; */
        u8 noise_power_db_msb;
-       char path_cfotail[2];
+       s8 path_cfotail[2];
        u8 pcts_mask[2];
-       char stream_rxevm[2];
+       s8 stream_rxevm[2];
        u8 path_rxsnr[2];
        u8 noise_power_db_lsb;
        u8 rsvd_2[3];
@@ -422,8 +422,8 @@ struct rx_fwinfo_8723be {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[2];
+       s8 rxevm[2];
+       s8 rxsnr[2];
        u8 pcts_msk_rpt[2];
        u8 pdsnr[2];
        u8 csi_current[2];
index 17a6817..bdfd444 100644
@@ -843,7 +843,7 @@ static void rtl8821ae_dm_dig(struct ieee80211_hw *hw)
                                dm_digtable->rssi_val_min + offset;
 
                RT_TRACE(rtlpriv, COMP_DIG, DBG_LOUD,
-                        "dm_digtable->rssi_val_min=0x%x,dm_digtable->rx_gain_max = 0x%x",
+                        "dm_digtable->rssi_val_min=0x%x,dm_digtable->rx_gain_max = 0x%x\n",
                         dm_digtable->rssi_val_min,
                         dm_digtable->rx_gain_max);
                if (rtlpriv->dm.one_entry_only) {
@@ -1355,7 +1355,7 @@ void rtl8812ae_dm_txpwr_track_set_pwr(struct ieee80211_hw *hw,
        u32 final_swing_idx[2];
        u8 pwr_tracking_limit = 26; /*+1.0dB*/
        u8 tx_rate = 0xFF;
-       char final_ofdm_swing_index = 0;
+       s8 final_ofdm_swing_index = 0;
 
        if (rtldm->tx_rate != 0xFF)
                tx_rate =
@@ -2045,7 +2045,7 @@ void rtl8821ae_dm_txpwr_track_set_pwr(struct ieee80211_hw *hw,
        u32 final_swing_idx[1];
        u8 pwr_tracking_limit = 26; /*+1.0dB*/
        u8 tx_rate = 0xFF;
-       char final_ofdm_swing_index = 0;
+       s8 final_ofdm_swing_index = 0;
 
        if (rtldm->tx_rate != 0xFF)
                tx_rate = rtl8821ae_hw_rate_to_mrate(hw, rtldm->tx_rate);
@@ -2682,9 +2682,9 @@ static void rtl8821ae_dm_check_edca_turbo(struct ieee80211_hw *hw)
        bool b_edca_turbo_on = false;
 
        RT_TRACE(rtlpriv, COMP_TURBO, DBG_LOUD,
-                "rtl8821ae_dm_check_edca_turbo=====>");
+                "rtl8821ae_dm_check_edca_turbo=====>\n");
        RT_TRACE(rtlpriv, COMP_TURBO, DBG_LOUD,
-                "Orginial BE PARAM: 0x%x\n",
+                "Original BE PARAM: 0x%x\n",
                 rtl_read_dword(rtlpriv, DM_REG_EDCA_BE_11N));
 
        if (rtlpriv->dm.dbginfo.num_non_be_pkt > 0x100)
@@ -2949,6 +2949,7 @@ void rtl8821ae_dm_watchdog(struct ieee80211_hw *hw)
        if (ppsc->p2p_ps_info.p2p_ps_mode)
                fw_ps_awake = false;
 
+       spin_lock(&rtlpriv->locks.rf_ps_lock);
        if ((ppsc->rfpwr_state == ERFON) &&
            ((!fw_current_inpsmode) && fw_ps_awake) &&
            (!ppsc->rfchange_inprogress)) {
@@ -2967,6 +2968,7 @@ void rtl8821ae_dm_watchdog(struct ieee80211_hw *hw)
                        rtl8821ae_dm_check_txpower_tracking_thermalmeter(hw);
                rtl8821ae_dm_iq_calibrate(hw);
        }
+       spin_unlock(&rtlpriv->locks.rf_ps_lock);
 
        rtlpriv->dm.dbginfo.num_qry_beacon_pkt = 0;
        RT_TRACE(rtlpriv, COMP_DIG, DBG_DMESG, "\n");
index b9436df..0cddf1a 100644
@@ -3101,86 +3101,22 @@ static void _rtl8821ae_read_adapter_info(struct ieee80211_hw *hw, bool b_pseudo_
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
        struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
-       struct device *dev = &rtl_pcipriv(hw)->dev.pdev->dev;
-       u16 i, usvalue;
-       u8 hwinfo[HWSET_MAX_SIZE];
-       u16 eeprom_id;
+       int params[] = {RTL_EEPROM_ID, EEPROM_VID, EEPROM_DID,
+                       EEPROM_SVID, EEPROM_SMID, EEPROM_MAC_ADDR,
+                       EEPROM_CHANNELPLAN, EEPROM_VERSION, EEPROM_CUSTOMER_ID,
+                       COUNTRY_CODE_WORLD_WIDE_13};
+       u8 *hwinfo;
 
        if (b_pseudo_test) {
                ;/* need add */
        }
 
-       switch (rtlefuse->epromtype) {
-       case EEPROM_BOOT_EFUSE:
-               rtl_efuse_shadow_map_update(hw);
-               break;
-
-       case EEPROM_93C46:
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL819X Not boot from eeprom, check it !!");
-               return;
-
-       default:
-               dev_warn(dev, "no efuse data\n");
-       }
-       memcpy(hwinfo, &rtlefuse->efuse_map[EFUSE_INIT_MAP][0], HWSET_MAX_SIZE);
-
-       RT_PRINT_DATA(rtlpriv, COMP_INIT, DBG_DMESG, "MAP\n",
-                     hwinfo, HWSET_MAX_SIZE);
-
-       eeprom_id = *((u16 *)&hwinfo[0]);
-       if (eeprom_id != RTL_EEPROM_ID) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_WARNING,
-                        "EEPROM ID(%#x) is invalid!!\n", eeprom_id);
-               rtlefuse->autoload_failflag = true;
-       } else {
-               RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD, "Autoload OK\n");
-               rtlefuse->autoload_failflag = false;
-       }
-
-       if (rtlefuse->autoload_failflag) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
-                        "RTL8812AE autoload_failflag, check it !!");
+       hwinfo = kzalloc(HWSET_MAX_SIZE, GFP_KERNEL);
+       if (!hwinfo)
                return;
-       }
-
-       rtlefuse->eeprom_version = *(u8 *)&hwinfo[EEPROM_VERSION];
-       if (rtlefuse->eeprom_version == 0xff)
-                       rtlefuse->eeprom_version = 0;
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM version: 0x%2x\n", rtlefuse->eeprom_version);
-
-       rtlefuse->eeprom_vid = *(u16 *)&hwinfo[EEPROM_VID];
-       rtlefuse->eeprom_did = *(u16 *)&hwinfo[EEPROM_DID];
-       rtlefuse->eeprom_svid = *(u16 *)&hwinfo[EEPROM_SVID];
-       rtlefuse->eeprom_smid = *(u16 *)&hwinfo[EEPROM_SMID];
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROMId = 0x%4x\n", eeprom_id);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM VID = 0x%4x\n", rtlefuse->eeprom_vid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM DID = 0x%4x\n", rtlefuse->eeprom_did);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SVID = 0x%4x\n", rtlefuse->eeprom_svid);
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM SMID = 0x%4x\n", rtlefuse->eeprom_smid);
-
-       /*customer ID*/
-       rtlefuse->eeprom_oemid = *(u8 *)&hwinfo[EEPROM_CUSTOMER_ID];
-       if (rtlefuse->eeprom_oemid == 0xFF)
-               rtlefuse->eeprom_oemid = 0;
-
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_LOUD,
-                "EEPROM Customer ID: 0x%2x\n", rtlefuse->eeprom_oemid);
-
-       for (i = 0; i < 6; i += 2) {
-               usvalue = *(u16 *)&hwinfo[EEPROM_MAC_ADDR + i];
-               *((u16 *)(&rtlefuse->dev_addr[i])) = usvalue;
-       }
 
-       RT_TRACE(rtlpriv, COMP_INIT, DBG_DMESG,
-                "dev_addr: %pM\n", rtlefuse->dev_addr);
+       if (rtl_get_hwinfo(hw, rtlpriv, HWSET_MAX_SIZE, hwinfo, params))
+               goto exit;
 
        _rtl8821ae_read_txpower_info_from_hwpg(hw, rtlefuse->autoload_failflag,
                                               hwinfo);
@@ -3280,6 +3216,8 @@ static void _rtl8821ae_read_adapter_info(struct ieee80211_hw *hw, bool b_pseudo_
                        break;
                }
        }
+exit:
+       kfree(hwinfo);
 }
 
 /*static void _rtl8821ae_hal_customized_behavior(struct ieee80211_hw *hw)
@@ -3836,7 +3774,7 @@ void rtl8821ae_update_hal_rate_tbl(struct ieee80211_hw *hw,
                rtl8821ae_update_hal_rate_mask(hw, sta, rssi_level);
        else
                /*RT_TRACE(rtlpriv, COMP_RATR,DBG_LOUD,
-                          "rtl8821ae_update_hal_rate_tbl() Error! 8821ae FW RA Only");*/
+                          "rtl8821ae_update_hal_rate_tbl() Error! 8821ae FW RA Only\n");*/
                rtl8821ae_update_hal_rate_table(hw, sta);
 }
 
index 0c3b9ce..a71bfe3 100644
@@ -366,12 +366,12 @@ u32 phy_get_tx_swing_8812A(struct ieee80211_hw *hw, u8    band,
        struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
        struct rtl_dm *rtldm = rtl_dm(rtlpriv);
        struct rtl_efuse *rtlefuse = rtl_efuse(rtl_priv(hw));
-       char reg_swing_2g = -1;/* 0xff; */
-       char reg_swing_5g = -1;/* 0xff; */
-       char swing_2g = -1 * reg_swing_2g;
-       char swing_5g = -1 * reg_swing_5g;
+       s8 reg_swing_2g = -1;/* 0xff; */
+       s8 reg_swing_5g = -1;/* 0xff; */
+       s8 swing_2g = -1 * reg_swing_2g;
+       s8 swing_5g = -1 * reg_swing_5g;
        u32  out = 0x200;
-       const char auto_temp = -1;
+       const s8 auto_temp = -1;
 
        RT_TRACE(rtlpriv, COMP_SCAN, DBG_LOUD,
                 "===> PHY_GetTxBBSwing_8812A, bbSwing_2G: %d, bbSwing_5G: %d,autoload_failflag=%d.\n",
@@ -524,7 +524,7 @@ void rtl8821ae_phy_switch_wirelessband(struct ieee80211_hw *hw, u8 band)
        struct rtl_dm *rtldm = rtl_dm(rtlpriv);
        u8 current_band = rtlhal->current_bandtype;
        u32 txpath, rxpath;
-       char bb_diff_between_band;
+       s8 bb_diff_between_band;
 
        txpath = rtl8821ae_phy_query_bb_reg(hw, RTXPATH, 0xf0);
        rxpath = rtl8821ae_phy_query_bb_reg(hw, RCCK_RX, 0x0f000000);
@@ -581,7 +581,7 @@ void rtl8821ae_phy_switch_wirelessband(struct ieee80211_hw *hw, u8 band)
                count = 0;
                reg_41a = rtl_read_word(rtlpriv, REG_TXPKT_EMPTY);
                RT_TRACE(rtlpriv, COMP_SCAN, DBG_LOUD,
-                        "Reg41A value %d", reg_41a);
+                        "Reg41A value %d\n", reg_41a);
                reg_41a &= 0x30;
                while ((reg_41a != 0x30) && (count < 50)) {
                        udelay(50);
@@ -591,7 +591,7 @@ void rtl8821ae_phy_switch_wirelessband(struct ieee80211_hw *hw, u8 band)
                        reg_41a &= 0x30;
                        count++;
                        RT_TRACE(rtlpriv, COMP_SCAN, DBG_LOUD,
-                                "Reg41A value %d", reg_41a);
+                                "Reg41A value %d\n", reg_41a);
                }
                if (count != 0)
                        RT_TRACE(rtlpriv, COMP_MLME, DBG_LOUD,
@@ -986,7 +986,7 @@ static void _rtl8812ae_phy_cross_reference_ht_and_vht_txpower_limit(struct ieee8
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_phy *rtlphy = &rtlpriv->phy;
        u8 regulation, bw, channel, rate_section;
-       char temp_pwrlmt = 0;
+       s8 temp_pwrlmt = 0;
 
        for (regulation = 0; regulation < MAX_REGULATION_NUM; ++regulation) {
                for (bw = 0; bw < MAX_5G_BANDWITH_NUM; ++bw) {
@@ -1013,7 +1013,7 @@ static void _rtl8812ae_phy_cross_reference_ht_and_vht_txpower_limit(struct ieee8
                                                                        rtlphy->txpwr_limit_5g[regulation][bw][3][channel][RF90_PATH_A];
                                                        }
 
-                                                       RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE, "use other value %d", temp_pwrlmt);
+                                                       RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE, "use other value %d\n", temp_pwrlmt);
                                                }
                                        }
                                }
@@ -1155,7 +1155,7 @@ static void _rtl8812ae_phy_convert_txpower_limit_to_power_index(struct ieee80211
        u8 regulation, bw, channel, rate_section;
        u8 base_index2_4G = 0;
        u8 base_index5G = 0;
-       char temp_value = 0, temp_pwrlmt = 0;
+       s8 temp_value = 0, temp_pwrlmt = 0;
        u8 rf_path = 0;
 
        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
@@ -1467,11 +1467,11 @@ static bool _rtl8812ae_eq_n_byte(u8 *str1, u8 *str2, u32 num)
        return true;
 }
 
-static char _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
+static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
                                              u8 band, u8 channel)
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
-       char channel_index = -1;
+       s8 channel_index = -1;
        u8  i = 0;
 
        if (band == BAND_ON_2_4G)
@@ -1482,12 +1482,12 @@ static char _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
                                channel_index = i;
                }
        } else
-               RT_TRACE(rtlpriv, COMP_POWER, DBG_LOUD, "Invalid Band %d in %s",
+               RT_TRACE(rtlpriv, COMP_POWER, DBG_LOUD, "Invalid Band %d in %s\n",
                         band,  __func__);
 
        if (channel_index == -1)
                RT_TRACE(rtlpriv, COMP_POWER, DBG_LOUD,
-                        "Invalid Channel %d of Band %d in %s", channel,
+                        "Invalid Channel %d of Band %d in %s\n", channel,
                         band, __func__);
 
        return channel_index;
@@ -1502,7 +1502,7 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw, u8 *pregul
        struct rtl_phy *rtlphy = &rtlpriv->phy;
        u8 regulation = 0, bandwidth = 0, rate_section = 0, channel;
        u8 channel_index;
-       char power_limit = 0, prev_power_limit, ret;
+       s8 power_limit = 0, prev_power_limit, ret;
 
        if (!_rtl8812ae_get_integer_from_string((char *)pchannel, &channel) ||
            !_rtl8812ae_get_integer_from_string((char *)ppower_limit,
@@ -1665,7 +1665,7 @@ static bool _rtl8821ae_phy_bb8821a_config_parafile(struct ieee80211_hw *hw)
        rtstatus = _rtl8821ae_phy_config_bb_with_headerfile(hw,
                                                       BASEBAND_CONFIG_PHY_REG);
        if (rtstatus != true) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "Write BB Reg Fail!!\n");
                return false;
        }
        _rtl8821ae_phy_init_tx_power_by_rate(hw);
@@ -1674,7 +1674,7 @@ static bool _rtl8821ae_phy_bb8821a_config_parafile(struct ieee80211_hw *hw)
                                                    BASEBAND_CONFIG_PHY_REG);
        }
        if (rtstatus != true) {
-               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!");
+               RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "BB_PG Reg Fail!!\n");
                return false;
        }
 
@@ -2254,9 +2254,9 @@ static bool _rtl8821ae_phy_get_chnl_index(u8 channel, u8 *chnl_index)
        return in_24g;
 }
 
-static char _rtl8821ae_phy_get_ratesection_intxpower_byrate(u8 path, u8 rate)
+static s8 _rtl8821ae_phy_get_ratesection_intxpower_byrate(u8 path, u8 rate)
 {
-       char rate_section = 0;
+       s8 rate_section = 0;
        switch (rate) {
        case DESC_RATE1M:
        case DESC_RATE2M:
@@ -2338,9 +2338,9 @@ static char _rtl8821ae_phy_get_ratesection_intxpower_byrate(u8 path, u8 rate)
        return rate_section;
 }
 
-static char _rtl8812ae_phy_get_world_wide_limit(char  *limit_table)
+static s8 _rtl8812ae_phy_get_world_wide_limit(s8  *limit_table)
 {
-       char min = limit_table[0];
+       s8 min = limit_table[0];
        u8 i = 0;
 
        for (i = 0; i < MAX_REGULATION_NUM; ++i) {
@@ -2350,7 +2350,7 @@ static char _rtl8812ae_phy_get_world_wide_limit(char  *limit_table)
        return min;
 }
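`_rtl8812ae_phy_get_world_wide_limit()` above simply takes the minimum power limit across all regulation domains, which is why its table must be `s8`: limits can be negative, and an unsigned `char` would sort them wrong. A self-contained sketch of the same reduction (`MAX_REGULATION_NUM` value assumed here):

```c
#include <stdint.h>

#define MAX_REGULATION_NUM 4   /* illustrative; the real count is driver-defined */

/* minimum limit across regulation domains; int8_t keeps negative limits
 * ordering correctly, which plain char would not guarantee on all ABIs */
static int8_t world_wide_limit(const int8_t limits[MAX_REGULATION_NUM])
{
	int8_t min = limits[0];

	for (int i = 1; i < MAX_REGULATION_NUM; i++)
		if (limits[i] < min)
			min = limits[i];
	return min;
}
```
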
 
-static char _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
+static s8 _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
                                             u8 band,
                                             enum ht_channel_width bandwidth,
                                             enum radio_path rf_path,
@@ -2362,7 +2362,7 @@ static char _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
        short band_temp = -1, regulation = -1, bandwidth_temp = -1,
                 rate_section = -1, channel_temp = -1;
        u16 bd, regu, bdwidth, sec, chnl;
-       char power_limit = MAX_POWER_INDEX;
+       s8 power_limit = MAX_POWER_INDEX;
 
        if (rtlefuse->eeprom_regulatory == 2)
                return MAX_POWER_INDEX;
@@ -2489,7 +2489,7 @@ static char _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
        chnl = channel_temp;
 
        if (band == BAND_ON_2_4G) {
-               char limits[10] = {0};
+               s8 limits[10] = {0};
                u8 i;
 
                for (i = 0; i < 4; ++i)
@@ -2501,7 +2501,7 @@ static char _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
                        rtlphy->txpwr_limit_2_4g[regu][bdwidth]
                                        [sec][chnl][rf_path];
        } else if (band == BAND_ON_5G) {
-               char limits[10] = {0};
+               s8 limits[10] = {0};
                u8 i;
 
                for (i = 0; i < MAX_REGULATION_NUM; ++i)
@@ -2519,14 +2519,14 @@ static char _rtl8812ae_phy_get_txpower_limit(struct ieee80211_hw *hw,
        return power_limit;
 }
 
-static char _rtl8821ae_phy_get_txpower_by_rate(struct ieee80211_hw *hw,
+static s8 _rtl8821ae_phy_get_txpower_by_rate(struct ieee80211_hw *hw,
                                        u8 band, u8 path, u8 rate)
 {
        struct rtl_priv *rtlpriv = rtl_priv(hw);
        struct rtl_phy *rtlphy = &rtlpriv->phy;
        u8 shift = 0, rate_section, tx_num;
-       char tx_pwr_diff = 0;
-       char limit = 0;
+       s8 tx_pwr_diff = 0;
+       s8 limit = 0;
 
        rate_section = _rtl8821ae_phy_get_ratesection_intxpower_byrate(path, rate);
        tx_num = RF_TX_NUM_NONIMPLEMENT;
@@ -2639,7 +2639,7 @@ static u8 _rtl8821ae_get_txpower_index(struct ieee80211_hw *hw, u8 path,
        u8 index = (channel - 1);
        u8 txpower = 0;
        bool in_24g = false;
-       char powerdiff_byrate = 0;
+       s8 powerdiff_byrate = 0;
 
        if (((rtlhal->current_bandtype == BAND_ON_2_4G) &&
            (channel > 14 || channel < 1)) ||
@@ -4637,7 +4637,7 @@ void rtl8821ae_phy_lc_calibrate(struct ieee80211_hw *hw)
 {
 }
 
-void rtl8821ae_phy_ap_calibrate(struct ieee80211_hw *hw, char delta)
+void rtl8821ae_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta)
 {
 }
 
index c411f0a..1285e1a 100644
@@ -236,7 +236,7 @@ void rtl8821ae_phy_iq_calibrate(struct ieee80211_hw *hw,
                                bool b_recovery);
 void rtl8812ae_phy_iq_calibrate(struct ieee80211_hw *hw,
                                bool b_recovery);
-void rtl8821ae_phy_ap_calibrate(struct ieee80211_hw *hw, char delta);
+void rtl8821ae_phy_ap_calibrate(struct ieee80211_hw *hw, s8 delta);
 void rtl8821ae_phy_lc_calibrate(struct ieee80211_hw *hw);
 void rtl8821ae_phy_set_rfpath_switch(struct ieee80211_hw *hw, bool bmain);
 bool rtl8812ae_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
index 2922538..c6ab957 100644
@@ -454,7 +454,7 @@ static bool _rtl8821ae_phy_rf6052_config_parafile(struct ieee80211_hw *hw)
 
                if (!rtstatus) {
                        RT_TRACE(rtlpriv, COMP_INIT, DBG_TRACE,
-                                "Radio[%d] Fail!!", rfpath);
+                                "Radio[%d] Fail!!\n", rfpath);
                        return false;
                }
        }
index 41efaa1..2772718 100644
@@ -48,7 +48,7 @@ static u8 _rtl8821ae_map_hwqueue_to_fwqueue(struct sk_buff *skb, u8 hw_queue)
        return skb->priority;
 }
 
-static u16 odm_cfo(char value)
+static u16 odm_cfo(s8 value)
 {
        int ret_val;
 
@@ -64,9 +64,9 @@ static u16 odm_cfo(char value)
        return ret_val;
 }
 
-static u8 _rtl8821ae_evm_dbm_jaguar(char value)
+static u8 _rtl8821ae_evm_dbm_jaguar(s8 value)
 {
-       char ret_val = value;
+       s8 ret_val = value;
 
        /* -33dB~0dB to 33dB ~ 0dB*/
        if (ret_val == -128)
@@ -88,7 +88,7 @@ static void query_rxphystatus(struct ieee80211_hw *hw,
        struct phy_status_rpt *p_phystrpt = (struct phy_status_rpt *)p_drvinfo;
        struct rtl_dm *rtldm = rtl_dm(rtl_priv(hw));
        struct rtl_phy *rtlphy = &rtlpriv->phy;
-       char rx_pwr_all = 0, rx_pwr[4];
+       s8 rx_pwr_all = 0, rx_pwr[4];
        u8 rf_rx_num = 0, evm, evmdbm, pwdb_all;
        u8 i, max_spatial_stream;
        u32 rssi, total_rssi = 0;
@@ -170,7 +170,7 @@ static void query_rxphystatus(struct ieee80211_hw *hw,
                                        pwdb_all = 100;
                        }
                } else { /* 8821 */
-                       char pout = -6;
+                       s8 pout = -6;
 
                        switch (lan_idx) {
                        case 5:
@@ -275,7 +275,7 @@ static void query_rxphystatus(struct ieee80211_hw *hw,
                if (bpacket_match_bssid) {
                        for (i = RF90_PATH_A; i <= RF90_PATH_B; i++)
                                rtl_priv(hw)->dm.cfo_tail[i] =
-                                       (char)p_phystrpt->cfotail[i];
+                                       (s8)p_phystrpt->cfotail[i];
 
                        rtl_priv(hw)->dm.packet_count++;
                }
@@ -716,7 +716,7 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
                                 PCI_DMA_TODEVICE);
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, sizeof(struct tx_desc_8821ae));
@@ -857,7 +857,7 @@ void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw,
 
        if (pci_dma_mapping_error(rtlpci->pdev, mapping)) {
                RT_TRACE(rtlpriv, COMP_SEND, DBG_TRACE,
-                        "DMA mapping error");
+                        "DMA mapping error\n");
                return;
        }
        CLEAR_PCI_TX_DESC_CONTENT(pdesc, TX_DESC_SIZE);
index ad565be..b6f3c56 100644
@@ -390,11 +390,11 @@ struct phy_status_rpt {
        u8 cfosho[4];   /* DW 1 byte 1 DW 2 byte 0 */
 
        /* DWORD 2 */
-       char cfotail[4];        /* DW 2 byte 1 DW 3 byte 0 */
+       s8 cfotail[4];  /* DW 2 byte 1 DW 3 byte 0 */
 
        /* DWORD 3 */
-       char rxevm[2];  /* DW 3 byte 1 DW 3 byte 2 */
-       char rxsnr[2];  /* DW 3 byte 3 DW 4 byte 0 */
+       s8 rxevm[2];    /* DW 3 byte 1 DW 3 byte 2 */
+       s8 rxsnr[2];    /* DW 3 byte 3 DW 4 byte 0 */
 
        /* DWORD 4 */
        u8 pcts_msk_rpt[2];
@@ -418,8 +418,8 @@ struct rx_fwinfo_8821ae {
        u8 pwdb_all;
        u8 cfosho[4];
        u8 cfotail[4];
-       char rxevm[2];
-       char rxsnr[4];
+       s8 rxevm[2];
+       s8 rxsnr[4];
        u8 pdsnr[2];
        u8 csi_current[2];
        u8 csi_target[2];
index d8b3069..61700fa 100644
@@ -26,7 +26,7 @@
 #include "stats.h"
 #include <linux/export.h>
 
-u8 rtl_query_rxpwrpercentage(char antpower)
+u8 rtl_query_rxpwrpercentage(s8 antpower)
 {
        if ((antpower <= -100) || (antpower >= 20))
                return 0;
@@ -37,9 +37,9 @@ u8 rtl_query_rxpwrpercentage(char antpower)
 }
 EXPORT_SYMBOL(rtl_query_rxpwrpercentage);
 
-u8 rtl_evm_db_to_percentage(char value)
+u8 rtl_evm_db_to_percentage(s8 value)
 {
-       char ret_val = clamp(-value, 0, 33) * 3;
+       s8 ret_val = clamp(-value, 0, 33) * 3;
 
        if (ret_val == 99)
                ret_val = 100;
index 2b57dff..bd0108f 100644
@@ -33,8 +33,8 @@
 /* Rx smooth factor */
 #define        RX_SMOOTH_FACTOR                        20
 
-u8 rtl_query_rxpwrpercentage(char antpower);
-u8 rtl_evm_db_to_percentage(char value);
+u8 rtl_query_rxpwrpercentage(s8 antpower);
+u8 rtl_evm_db_to_percentage(s8 value);
 long rtl_signal_scale_mapping(struct ieee80211_hw *hw, long currsig);
 void rtl_process_phyinfo(struct ieee80211_hw *hw, u8 *buffer,
                         struct rtl_stats *pstatus);
index 4e0ab4d..c5086c2 100644
@@ -1089,7 +1089,7 @@ struct dynamic_primary_cca {
 };
 
 struct rtl_regulatory {
-       char alpha2[2];
+       s8 alpha2[2];
        u16 country_code;
        u16 max_power_level;
        u32 tp_scale;
@@ -1256,16 +1256,16 @@ struct rtl_phy {
        u8 cur_bw20_txpwridx;
        u8 cur_bw40_txpwridx;
 
-       char txpwr_limit_2_4g[MAX_REGULATION_NUM]
-                            [MAX_2_4G_BANDWITH_NUM]
-                            [MAX_RATE_SECTION_NUM]
-                            [CHANNEL_MAX_NUMBER_2G]
-                            [MAX_RF_PATH_NUM];
-       char txpwr_limit_5g[MAX_REGULATION_NUM]
-                          [MAX_5G_BANDWITH_NUM]
+       s8 txpwr_limit_2_4g[MAX_REGULATION_NUM]
+                          [MAX_2_4G_BANDWITH_NUM]
                           [MAX_RATE_SECTION_NUM]
-                          [CHANNEL_MAX_NUMBER_5G]
+                          [CHANNEL_MAX_NUMBER_2G]
                           [MAX_RF_PATH_NUM];
+       s8 txpwr_limit_5g[MAX_REGULATION_NUM]
+                        [MAX_5G_BANDWITH_NUM]
+                        [MAX_RATE_SECTION_NUM]
+                        [CHANNEL_MAX_NUMBER_5G]
+                        [MAX_RF_PATH_NUM];
 
        u32 rfreg_chnlval[2];
        bool apk_done;
@@ -1639,7 +1639,7 @@ struct fast_ant_training {
 };
 
 struct dm_phy_dbg_info {
-       char rx_snrdb[4];
+       s8 rx_snrdb[4];
        u64 num_qry_phy_status;
        u64 num_qry_phy_status_cck;
        u64 num_qry_phy_status_ofdm;
@@ -1688,16 +1688,16 @@ struct rtl_dm {
        u8 txpower_track_control;
        bool interrupt_migration;
        bool disable_tx_int;
-       char ofdm_index[MAX_RF_PATH];
+       s8 ofdm_index[MAX_RF_PATH];
        u8 default_ofdm_index;
        u8 default_cck_index;
-       char cck_index;
-       char delta_power_index[MAX_RF_PATH];
-       char delta_power_index_last[MAX_RF_PATH];
-       char power_index_offset[MAX_RF_PATH];
-       char absolute_ofdm_swing_idx[MAX_RF_PATH];
-       char remnant_ofdm_swing_idx[MAX_RF_PATH];
-       char remnant_cck_idx;
+       s8 cck_index;
+       s8 delta_power_index[MAX_RF_PATH];
+       s8 delta_power_index_last[MAX_RF_PATH];
+       s8 power_index_offset[MAX_RF_PATH];
+       s8 absolute_ofdm_swing_idx[MAX_RF_PATH];
+       s8 remnant_ofdm_swing_idx[MAX_RF_PATH];
+       s8 remnant_cck_idx;
        bool modify_txagc_flag_path_a;
        bool modify_txagc_flag_path_b;
 
@@ -1726,8 +1726,8 @@ struct rtl_dm {
        u8      swing_idx_cck_base;
        bool    swing_flag_cck;
 
-       char    swing_diff_2g;
-       char    swing_diff_5g;
+       s8      swing_diff_2g;
+       s8      swing_diff_5g;
 
        u8 delta_swing_table_idx_24gccka_p[DEL_SW_IDX_SZ];
        u8 delta_swing_table_idx_24gccka_n[DEL_SW_IDX_SZ];
@@ -1838,17 +1838,17 @@ struct rtl_efuse {
         *
         * Sizes of these arrays are decided by the larger ones.
         */
-       char txpwr_cckdiff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
-       char txpwr_ht20diff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
-       char txpwr_ht40diff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
-       char txpwr_legacyhtdiff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
+       s8 txpwr_cckdiff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
+       s8 txpwr_ht20diff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
+       s8 txpwr_ht40diff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
+       s8 txpwr_legacyhtdiff[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
 
        u8 txpwr_5g_bw40base[MAX_RF_PATH][CHANNEL_MAX_NUMBER];
        u8 txpwr_5g_bw80base[MAX_RF_PATH][CHANNEL_MAX_NUMBER_5G_80M];
-       char txpwr_5g_ofdmdiff[MAX_RF_PATH][MAX_TX_COUNT];
-       char txpwr_5g_bw20diff[MAX_RF_PATH][MAX_TX_COUNT];
-       char txpwr_5g_bw40diff[MAX_RF_PATH][MAX_TX_COUNT];
-       char txpwr_5g_bw80diff[MAX_RF_PATH][MAX_TX_COUNT];
+       s8 txpwr_5g_ofdmdiff[MAX_RF_PATH][MAX_TX_COUNT];
+       s8 txpwr_5g_bw20diff[MAX_RF_PATH][MAX_TX_COUNT];
+       s8 txpwr_5g_bw40diff[MAX_RF_PATH][MAX_TX_COUNT];
+       s8 txpwr_5g_bw80diff[MAX_RF_PATH][MAX_TX_COUNT];
 
        u8 txpwr_safetyflag;                    /* Band edge enable flag */
        u16 eeprom_txpowerdiff;
@@ -2006,7 +2006,7 @@ struct rtl_stats {
        bool is_ht;
        bool packet_toself;
        bool packet_beacon;     /*for rssi */
-       char cck_adc_pwdb[4];   /*for rx path selection */
+       s8 cck_adc_pwdb[4];     /*for rx path selection */
 
        bool is_vht;
        bool is_short_gi;
@@ -2413,9 +2413,9 @@ struct dig_t {
        u8 presta_cstate;
        u8 curmultista_cstate;
        u8 stop_dig;
-       char back_val;
-       char back_range_max;
-       char back_range_min;
+       s8 back_val;
+       s8 back_range_max;
+       s8 back_range_min;
        u8 rx_gain_max;
        u8 rx_gain_min;
        u8 min_undec_pwdb_for_dm;
@@ -2441,8 +2441,8 @@ struct dig_t {
        u8 cur_cs_ratiostate;
        u8 pre_cs_ratiostate;
        u8 backoff_enable_flag;
-       char backoffval_range_max;
-       char backoffval_range_min;
+       s8 backoffval_range_max;
+       s8 backoffval_range_min;
        u8 dig_min_0;
        u8 dig_min_1;
        u8 bt30_cur_igi;
index ef81184..2c5df43 100644
@@ -112,12 +112,18 @@ static int wlcore_smart_config_decode_event(struct wl1271 *wl,
        return 0;
 }
 
-static void wlcore_event_time_sync(struct wl1271 *wl, u16 tsf_msb, u16 tsf_lsb)
+static void wlcore_event_time_sync(struct wl1271 *wl,
+                                  u16 tsf_high_msb, u16 tsf_high_lsb,
+                                  u16 tsf_low_msb, u16 tsf_low_lsb)
 {
-       u32 clock;
-       /* convert the MSB+LSB to a u32 TSF value */
-       clock = (tsf_msb << 16) | tsf_lsb;
-       wl1271_info("TIME_SYNC_EVENT_ID: clock %u", clock);
+       u32 clock_low;
+       u32 clock_high;
+
+       clock_high = (tsf_high_msb << 16) | tsf_high_lsb;
+       clock_low = (tsf_low_msb << 16) | tsf_low_lsb;
+
+       wl1271_info("TIME_SYNC_EVENT_ID: clock_high %u, clock low %u",
+                   clock_high, clock_low);
 }
 
 int wl18xx_process_mailbox_events(struct wl1271 *wl)
@@ -138,8 +144,10 @@ int wl18xx_process_mailbox_events(struct wl1271 *wl)
 
        if (vector & TIME_SYNC_EVENT_ID)
                wlcore_event_time_sync(wl,
-                               mbox->time_sync_tsf_msb,
-                               mbox->time_sync_tsf_lsb);
+                       mbox->time_sync_tsf_high_msb,
+                       mbox->time_sync_tsf_high_lsb,
+                       mbox->time_sync_tsf_low_msb,
+                       mbox->time_sync_tsf_low_lsb);
 
        if (vector & RADAR_DETECTED_EVENT_ID) {
                wl1271_info("radar event: channel %d type %s",
@@ -187,11 +195,11 @@ int wl18xx_process_mailbox_events(struct wl1271 *wl)
         */
        if (vector & MAX_TX_FAILURE_EVENT_ID)
                wlcore_event_max_tx_failure(wl,
-                               le32_to_cpu(mbox->tx_retry_exceeded_bitmap));
+                               le16_to_cpu(mbox->tx_retry_exceeded_bitmap));
 
        if (vector & INACTIVE_STA_EVENT_ID)
                wlcore_event_inactive_sta(wl,
-                               le32_to_cpu(mbox->inactive_sta_bitmap));
+                               le16_to_cpu(mbox->inactive_sta_bitmap));
 
        if (vector & REMAIN_ON_CHANNEL_COMPLETE_EVENT_ID)
                wlcore_event_roc_complete(wl);
index 070de12..ce8ea9c 100644

@@ -74,10 +74,16 @@ struct wl18xx_event_mailbox {
        __le16 bss_loss_bitmap;
 
        /* bitmap of stations (by HLID) which exceeded max tx retries */
-       __le32 tx_retry_exceeded_bitmap;
+       __le16 tx_retry_exceeded_bitmap;
+
+       /* time sync high msb */
+       __le16 time_sync_tsf_high_msb;
 
        /* bitmap of inactive stations (by HLID) */
-       __le32 inactive_sta_bitmap;
+       __le16 inactive_sta_bitmap;
+
+       /* time sync high lsb */
+       __le16 time_sync_tsf_high_lsb;
 
        /* rx BA win size indicated by RX_BA_WIN_SIZE_CHANGE_EVENT_ID */
        u8 rx_ba_role_id;
@@ -98,14 +104,15 @@ struct wl18xx_event_mailbox {
        u8 sc_sync_channel;
        u8 sc_sync_band;
 
-       /* time sync msb*/
-       u16 time_sync_tsf_msb;
+       /* time sync low msb */
+       __le16 time_sync_tsf_low_msb;
+
        /* radar detect */
        u8 radar_channel;
        u8 radar_type;
 
-       /* time sync lsb*/
-       u16 time_sync_tsf_lsb;
+       /* time sync low lsb */
+       __le16 time_sync_tsf_low_lsb;
 
 } __packed;
 
index 3315356..5f360ce 100644
@@ -1566,6 +1566,13 @@ int wl12xx_cmd_add_peer(struct wl1271 *wl, struct wl12xx_vif *wlvif,
                cpu_to_le32(wl1271_tx_enabled_rates_get(wl, sta_rates,
                                                        wlvif->band));
 
+       if (!cmd->supported_rates) {
+               wl1271_debug(DEBUG_CMD,
+                            "peer has no supported rates yet, configuring basic rates: 0x%x",
+                            wlvif->basic_rate_set);
+               cmd->supported_rates = cpu_to_le32(wlvif->basic_rate_set);
+       }
+
        wl1271_debug(DEBUG_CMD, "new peer rates=0x%x queues=0x%x",
                     cmd->supported_rates, sta->uapsd_queues);
 
index 69267d5..9abc152 100644
@@ -5098,6 +5098,11 @@ static int wl12xx_update_sta_state(struct wl1271 *wl,
                if (ret < 0)
                        return ret;
 
+               /* reconfigure rates */
+               ret = wl12xx_cmd_add_peer(wl, wlvif, sta, wl_sta->hlid);
+               if (ret < 0)
+                       return ret;
+
                ret = wl1271_acx_set_ht_capabilities(wl, &sta->ht_cap, true,
                                                     wl_sta->hlid);
                if (ret)
index c172da5..5839acb 100644
@@ -241,7 +241,6 @@ static int wlcore_probe_of(struct device *dev, int *irq,
        *irq = irq_of_parse_and_map(np, 0);
        if (!*irq) {
                dev_err(dev, "No irq in platform data\n");
-               kfree(pdev_data);
                return -EINVAL;
        }
 
index ea8321a..9d23692 100644
@@ -40,6 +40,7 @@ config NFC_MEI_PHY
 
 config NFC_SIM
        tristate "NFC hardware simulator driver"
+       depends on NFC_DIGITAL
        help
          This driver declares two virtual NFC devices supporting NFC-DEP
          protocol. An LLCP connection can be established between them and
index e44a7a2..7c1eaea 100644
@@ -345,7 +345,7 @@ static void fdp_nci_release_firmware(struct nci_dev *ndev)
 
        if (info->ram_patch) {
                release_firmware(info->ram_patch);
-               info->otp_patch = NULL;
+               info->ram_patch = NULL;
        }
 }
 
@@ -353,7 +353,7 @@ static int fdp_nci_patch_otp(struct nci_dev *ndev)
 {
        struct fdp_nci_info *info = nci_get_drvdata(ndev);
        struct device *dev = &info->phy->i2c_dev->dev;
-       u8 conn_id;
+       int conn_id;
        int r = 0;
 
        if (info->otp_version >= info->otp_patch_version)
@@ -424,7 +424,7 @@ static int fdp_nci_patch_ram(struct nci_dev *ndev)
 {
        struct fdp_nci_info *info = nci_get_drvdata(ndev);
        struct device *dev = &info->phy->i2c_dev->dev;
-       u8 conn_id;
+       int conn_id;
        int r = 0;
 
        if (info->ram_version >= info->ram_patch_version)
index 93aaca5..a466e79 100644
 #include <linux/device.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/ctype.h>
+#include <linux/debugfs.h>
 #include <linux/nfc.h>
 #include <net/nfc/nfc.h>
+#include <net/nfc/digital.h>
 
-#define DEV_ERR(_dev, fmt, args...) nfc_err(&_dev->nfc_dev->dev, \
-                                               "%s: " fmt, __func__, ## args)
+#define NFCSIM_ERR(d, fmt, args...) nfc_err(&d->nfc_digital_dev->nfc_dev->dev, \
+                                           "%s: " fmt, __func__, ## args)
 
-#define DEV_DBG(_dev, fmt, args...) dev_dbg(&_dev->nfc_dev->dev, \
-                                               "%s: " fmt, __func__, ## args)
+#define NFCSIM_DBG(d, fmt, args...) dev_dbg(&d->nfc_digital_dev->nfc_dev->dev, \
+                                           "%s: " fmt, __func__, ## args)
 
-#define NFCSIM_VERSION "0.1"
+#define NFCSIM_VERSION "0.2"
 
-#define NFCSIM_POLL_NONE       0
-#define NFCSIM_POLL_INITIATOR  1
-#define NFCSIM_POLL_TARGET     2
-#define NFCSIM_POLL_DUAL       (NFCSIM_POLL_INITIATOR | NFCSIM_POLL_TARGET)
+#define NFCSIM_MODE_NONE       0
+#define NFCSIM_MODE_INITIATOR  1
+#define NFCSIM_MODE_TARGET     2
 
-#define RX_DEFAULT_DELAY       5
+#define NFCSIM_CAPABILITIES (NFC_DIGITAL_DRV_CAPS_IN_CRC   | \
+                            NFC_DIGITAL_DRV_CAPS_TG_CRC)
 
 struct nfcsim {
-       struct nfc_dev *nfc_dev;
+       struct nfc_digital_dev *nfc_digital_dev;
 
-       struct mutex lock;
+       struct work_struct recv_work;
+       struct delayed_work send_work;
 
-       struct delayed_work recv_work;
+       struct nfcsim_link *link_in;
+       struct nfcsim_link *link_out;
 
-       struct sk_buff *clone_skb;
+       bool up;
+       u8 mode;
+       u8 rf_tech;
 
-       struct delayed_work poll_work;
-       u8 polling_mode;
-       u8 curr_polling_mode;
+       u16 recv_timeout;
 
-       u8 shutting_down;
+       nfc_digital_cmd_complete_t cb;
+       void *arg;
 
-       u8 up;
+       u8 dropframe;
+};
 
-       u8 initiator;
+struct nfcsim_link {
+       struct mutex lock;
 
-       u32 rx_delay;
+       u8 rf_tech;
+       u8 mode;
 
-       data_exchange_cb_t cb;
-       void *cb_context;
+       u8 shutdown;
 
-       struct nfcsim *peer_dev;
+       struct sk_buff *skb;
+       wait_queue_head_t recv_wait;
+       u8 cond;
 };
 
-static struct nfcsim *dev0;
-static struct nfcsim *dev1;
-
-static struct workqueue_struct *wq;
-
-static void nfcsim_cleanup_dev(struct nfcsim *dev, u8 shutdown)
+static struct nfcsim_link *nfcsim_link_new(void)
 {
-       DEV_DBG(dev, "shutdown=%d\n", shutdown);
+       struct nfcsim_link *link;
 
-       mutex_lock(&dev->lock);
+       link = kzalloc(sizeof(struct nfcsim_link), GFP_KERNEL);
+       if (!link)
+               return NULL;
 
-       dev->polling_mode = NFCSIM_POLL_NONE;
-       dev->shutting_down = shutdown;
-       dev->cb = NULL;
-       dev_kfree_skb(dev->clone_skb);
-       dev->clone_skb = NULL;
+       mutex_init(&link->lock);
+       init_waitqueue_head(&link->recv_wait);
 
-       mutex_unlock(&dev->lock);
-
-       cancel_delayed_work_sync(&dev->poll_work);
-       cancel_delayed_work_sync(&dev->recv_work);
+       return link;
 }
 
-static int nfcsim_target_found(struct nfcsim *dev)
+static void nfcsim_link_free(struct nfcsim_link *link)
 {
-       struct nfc_target nfc_tgt;
+       dev_kfree_skb(link->skb);
+       kfree(link);
+}
 
-       DEV_DBG(dev, "\n");
+static void nfcsim_link_recv_wake(struct nfcsim_link *link)
+{
+       link->cond = 1;
+       wake_up_interruptible(&link->recv_wait);
+}
 
-       memset(&nfc_tgt, 0, sizeof(struct nfc_target));
+static void nfcsim_link_set_skb(struct nfcsim_link *link, struct sk_buff *skb,
+                               u8 rf_tech, u8 mode)
+{
+       mutex_lock(&link->lock);
 
-       nfc_tgt.supported_protocols = NFC_PROTO_NFC_DEP_MASK;
-       nfc_targets_found(dev->nfc_dev, &nfc_tgt, 1);
+       dev_kfree_skb(link->skb);
+       link->skb = skb;
+       link->rf_tech = rf_tech;
+       link->mode = mode;
 
-       return 0;
+       mutex_unlock(&link->lock);
 }
 
-static int nfcsim_dev_up(struct nfc_dev *nfc_dev)
+static void nfcsim_link_recv_cancel(struct nfcsim_link *link)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
+       mutex_lock(&link->lock);
 
-       DEV_DBG(dev, "\n");
+       link->mode = NFCSIM_MODE_NONE;
 
-       mutex_lock(&dev->lock);
+       mutex_unlock(&link->lock);
 
-       dev->up = 1;
-
-       mutex_unlock(&dev->lock);
-
-       return 0;
+       nfcsim_link_recv_wake(link);
 }
 
-static int nfcsim_dev_down(struct nfc_dev *nfc_dev)
+static void nfcsim_link_shutdown(struct nfcsim_link *link)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
-
-       DEV_DBG(dev, "\n");
+       mutex_lock(&link->lock);
 
-       mutex_lock(&dev->lock);
+       link->shutdown = 1;
+       link->mode = NFCSIM_MODE_NONE;
 
-       dev->up = 0;
+       mutex_unlock(&link->lock);
 
-       mutex_unlock(&dev->lock);
-
-       return 0;
+       nfcsim_link_recv_wake(link);
 }
 
-static int nfcsim_dep_link_up(struct nfc_dev *nfc_dev,
-                             struct nfc_target *target,
-                             u8 comm_mode, u8 *gb, size_t gb_len)
+static struct sk_buff *nfcsim_link_recv_skb(struct nfcsim_link *link,
+                                           int timeout, u8 rf_tech, u8 mode)
 {
        int rc;
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
-       struct nfcsim *peer = dev->peer_dev;
-       u8 *remote_gb;
-       size_t remote_gb_len;
+       struct sk_buff *skb;
 
-       DEV_DBG(dev, "target_idx: %d, comm_mode: %d\n", target->idx, comm_mode);
+       rc = wait_event_interruptible_timeout(link->recv_wait,
+                                             link->cond,
+                                             msecs_to_jiffies(timeout));
 
-       mutex_lock(&peer->lock);
+       mutex_lock(&link->lock);
 
-       nfc_tm_activated(peer->nfc_dev, NFC_PROTO_NFC_DEP_MASK,
-                        NFC_COMM_ACTIVE, gb, gb_len);
+       skb = link->skb;
+       link->skb = NULL;
 
-       remote_gb = nfc_get_local_general_bytes(peer->nfc_dev, &remote_gb_len);
-       if (!remote_gb) {
-               DEV_ERR(peer, "Can't get remote general bytes\n");
+       if (!rc) {
+               rc = -ETIMEDOUT;
+               goto done;
+       }
 
-               mutex_unlock(&peer->lock);
-               return -EINVAL;
+       if (!skb || link->rf_tech != rf_tech || link->mode == mode) {
+               rc = -EINVAL;
+               goto done;
        }
 
-       mutex_unlock(&peer->lock);
+       if (link->shutdown) {
+               rc = -ENODEV;
+               goto done;
+       }
 
-       mutex_lock(&dev->lock);
+done:
+       mutex_unlock(&link->lock);
 
-       rc = nfc_set_remote_general_bytes(nfc_dev, remote_gb, remote_gb_len);
-       if (rc) {
-               DEV_ERR(dev, "Can't set remote general bytes\n");
-               mutex_unlock(&dev->lock);
-               return rc;
+       if (rc < 0) {
+               dev_kfree_skb(skb);
+               skb = ERR_PTR(rc);
        }
 
-       rc = nfc_dep_link_is_up(nfc_dev, target->idx, NFC_COMM_ACTIVE,
-                               NFC_RF_INITIATOR);
-
-       mutex_unlock(&dev->lock);
+       link->cond = 0;
 
-       return rc;
+       return skb;
 }
 
-static int nfcsim_dep_link_down(struct nfc_dev *nfc_dev)
+static void nfcsim_send_wq(struct work_struct *work)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
+       struct nfcsim *dev = container_of(work, struct nfcsim, send_work.work);
 
-       DEV_DBG(dev, "\n");
-
-       nfcsim_cleanup_dev(dev, 0);
-
-       return 0;
+       /*
+        * To effectively send data, the device just wakes up its link_out,
+        * which is the link_in of the peer device. The exchanged skb has
+        * already been stored in dev->link_out through nfcsim_link_set_skb().
+        */
+       nfcsim_link_recv_wake(dev->link_out);
 }
 
-static int nfcsim_start_poll(struct nfc_dev *nfc_dev,
-                            u32 im_protocols, u32 tm_protocols)
+static void nfcsim_recv_wq(struct work_struct *work)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
-       int rc;
-
-       mutex_lock(&dev->lock);
+       struct nfcsim *dev = container_of(work, struct nfcsim, recv_work);
+       struct sk_buff *skb;
 
-       if (dev->polling_mode != NFCSIM_POLL_NONE) {
-               DEV_ERR(dev, "Already in polling mode\n");
-               rc = -EBUSY;
-               goto exit;
-       }
+       skb = nfcsim_link_recv_skb(dev->link_in, dev->recv_timeout,
+                                  dev->rf_tech, dev->mode);
 
-       if (im_protocols & NFC_PROTO_NFC_DEP_MASK)
-               dev->polling_mode |= NFCSIM_POLL_INITIATOR;
+       if (!dev->up) {
+               NFCSIM_ERR(dev, "Device is down\n");
 
-       if (tm_protocols & NFC_PROTO_NFC_DEP_MASK)
-               dev->polling_mode |= NFCSIM_POLL_TARGET;
+               if (!IS_ERR(skb))
+                       dev_kfree_skb(skb);
 
-       if (dev->polling_mode == NFCSIM_POLL_NONE) {
-               DEV_ERR(dev, "Unsupported polling mode\n");
-               rc = -EINVAL;
-               goto exit;
+               skb = ERR_PTR(-ENODEV);
        }
 
-       dev->initiator = 0;
-       dev->curr_polling_mode = NFCSIM_POLL_NONE;
+       dev->cb(dev->nfc_digital_dev, dev->arg, skb);
+}
 
-       queue_delayed_work(wq, &dev->poll_work, 0);
+static int nfcsim_send(struct nfc_digital_dev *ddev, struct sk_buff *skb,
+                      u16 timeout, nfc_digital_cmd_complete_t cb, void *arg)
+{
+       struct nfcsim *dev = nfc_digital_get_drvdata(ddev);
+       u8 delay;
 
-       DEV_DBG(dev, "Start polling: im: 0x%X, tm: 0x%X\n", im_protocols,
-               tm_protocols);
+       if (!dev->up) {
+               NFCSIM_ERR(dev, "Device is down\n");
+               return -ENODEV;
+       }
 
-       rc = 0;
-exit:
-       mutex_unlock(&dev->lock);
+       dev->recv_timeout = timeout;
+       dev->cb = cb;
+       dev->arg = arg;
 
-       return rc;
-}
+       schedule_work(&dev->recv_work);
 
-static void nfcsim_stop_poll(struct nfc_dev *nfc_dev)
-{
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
+       if (dev->dropframe) {
+               NFCSIM_DBG(dev, "dropping frame (out of %d)\n", dev->dropframe);
+               dev_kfree_skb(skb);
+               dev->dropframe--;
 
-       DEV_DBG(dev, "Stop poll\n");
+               return 0;
+       }
 
-       mutex_lock(&dev->lock);
+       if (skb) {
+               nfcsim_link_set_skb(dev->link_out, skb, dev->rf_tech,
+                                   dev->mode);
 
-       dev->polling_mode = NFCSIM_POLL_NONE;
+               /* Add random delay (between 3 and 10 ms) before sending data */
+               get_random_bytes(&delay, 1);
+               delay = 3 + (delay & 0x07);
 
-       mutex_unlock(&dev->lock);
+               schedule_delayed_work(&dev->send_work, msecs_to_jiffies(delay));
+       }
 
-       cancel_delayed_work_sync(&dev->poll_work);
+       return 0;
 }
 
-static int nfcsim_activate_target(struct nfc_dev *nfc_dev,
-                                 struct nfc_target *target, u32 protocol)
+static void nfcsim_abort_cmd(struct nfc_digital_dev *ddev)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
-
-       DEV_DBG(dev, "\n");
+       struct nfcsim *dev = nfc_digital_get_drvdata(ddev);
 
-       return -ENOTSUPP;
+       nfcsim_link_recv_cancel(dev->link_in);
 }
 
-static void nfcsim_deactivate_target(struct nfc_dev *nfc_dev,
-                                    struct nfc_target *target, u8 mode)
+static int nfcsim_switch_rf(struct nfc_digital_dev *ddev, bool on)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
+       struct nfcsim *dev = nfc_digital_get_drvdata(ddev);
+
+       dev->up = on;
 
-       DEV_DBG(dev, "\n");
+       return 0;
 }
 
-static void nfcsim_wq_recv(struct work_struct *work)
+static int nfcsim_in_configure_hw(struct nfc_digital_dev *ddev,
+                                         int type, int param)
 {
-       struct nfcsim *dev = container_of(work, struct nfcsim,
-                                         recv_work.work);
+       struct nfcsim *dev = nfc_digital_get_drvdata(ddev);
 
-       mutex_lock(&dev->lock);
+       switch (type) {
+       case NFC_DIGITAL_CONFIG_RF_TECH:
+               dev->up = true;
+               dev->mode = NFCSIM_MODE_INITIATOR;
+               dev->rf_tech = param;
+               break;
 
-       if (dev->shutting_down || !dev->up || !dev->clone_skb) {
-               dev_kfree_skb(dev->clone_skb);
-               goto exit;
-       }
+       case NFC_DIGITAL_CONFIG_FRAMING:
+               break;
 
-       if (dev->initiator) {
-               if (!dev->cb) {
-                       DEV_ERR(dev, "Null recv callback\n");
-                       dev_kfree_skb(dev->clone_skb);
-                       goto exit;
-               }
-
-               dev->cb(dev->cb_context, dev->clone_skb, 0);
-               dev->cb = NULL;
-       } else {
-               nfc_tm_data_received(dev->nfc_dev, dev->clone_skb);
+       default:
+               NFCSIM_ERR(dev, "Invalid configuration type: %d\n", type);
+               return -EINVAL;
        }
 
-exit:
-       dev->clone_skb = NULL;
+       return 0;
+}
 
-       mutex_unlock(&dev->lock);
+static int nfcsim_in_send_cmd(struct nfc_digital_dev *ddev,
+                              struct sk_buff *skb, u16 timeout,
+                              nfc_digital_cmd_complete_t cb, void *arg)
+{
+       return nfcsim_send(ddev, skb, timeout, cb, arg);
 }
 
-static int nfcsim_tx(struct nfc_dev *nfc_dev, struct nfc_target *target,
-                    struct sk_buff *skb, data_exchange_cb_t cb,
-                    void *cb_context)
+static int nfcsim_tg_configure_hw(struct nfc_digital_dev *ddev,
+                                         int type, int param)
 {
-       struct nfcsim *dev = nfc_get_drvdata(nfc_dev);
-       struct nfcsim *peer = dev->peer_dev;
-       int err;
+       struct nfcsim *dev = nfc_digital_get_drvdata(ddev);
 
-       mutex_lock(&dev->lock);
+       switch (type) {
+       case NFC_DIGITAL_CONFIG_RF_TECH:
+               dev->up = true;
+               dev->mode = NFCSIM_MODE_TARGET;
+               dev->rf_tech = param;
+               break;
 
-       if (dev->shutting_down || !dev->up) {
-               mutex_unlock(&dev->lock);
-               err = -ENODEV;
-               goto exit;
+       case NFC_DIGITAL_CONFIG_FRAMING:
+               break;
+
+       default:
+               NFCSIM_ERR(dev, "Invalid configuration type: %d\n", type);
+               return -EINVAL;
        }
 
-       dev->cb = cb;
-       dev->cb_context = cb_context;
+       return 0;
+}
 
-       mutex_unlock(&dev->lock);
+static int nfcsim_tg_send_cmd(struct nfc_digital_dev *ddev,
+                              struct sk_buff *skb, u16 timeout,
+                              nfc_digital_cmd_complete_t cb, void *arg)
+{
+       return nfcsim_send(ddev, skb, timeout, cb, arg);
+}
 
-       mutex_lock(&peer->lock);
+static int nfcsim_tg_listen(struct nfc_digital_dev *ddev, u16 timeout,
+                           nfc_digital_cmd_complete_t cb, void *arg)
+{
+       return nfcsim_send(ddev, NULL, timeout, cb, arg);
+}
 
-       peer->clone_skb = skb_clone(skb, GFP_KERNEL);
+static struct nfc_digital_ops nfcsim_digital_ops = {
+       .in_configure_hw = nfcsim_in_configure_hw,
+       .in_send_cmd = nfcsim_in_send_cmd,
 
-       if (!peer->clone_skb) {
-               DEV_ERR(dev, "skb_clone failed\n");
-               mutex_unlock(&peer->lock);
-               err = -ENOMEM;
-               goto exit;
-       }
+       .tg_listen = nfcsim_tg_listen,
+       .tg_configure_hw = nfcsim_tg_configure_hw,
+       .tg_send_cmd = nfcsim_tg_send_cmd,
 
-       /* This simulates an arbitrary transmission delay between the 2 devices.
-        * If packet transmission occurs immediately between them, we have a
-        * non-stop flow of several tens of thousands SYMM packets per second
-        * and a burning cpu.
-        */
-       queue_delayed_work(wq, &peer->recv_work,
-                       msecs_to_jiffies(dev->rx_delay));
+       .abort_cmd = nfcsim_abort_cmd,
+       .switch_rf = nfcsim_switch_rf,
+};
+
+static struct dentry *nfcsim_debugfs_root;
 
-       mutex_unlock(&peer->lock);
+static void nfcsim_debugfs_init(void)
+{
+       nfcsim_debugfs_root = debugfs_create_dir("nfcsim", NULL);
 
-       err = 0;
-exit:
-       dev_kfree_skb(skb);
+       if (!nfcsim_debugfs_root)
+               pr_err("Could not create debugfs entry\n");
 
-       return err;
 }
 
-static int nfcsim_im_transceive(struct nfc_dev *nfc_dev,
-                               struct nfc_target *target, struct sk_buff *skb,
-                               data_exchange_cb_t cb, void *cb_context)
+static void nfcsim_debugfs_remove(void)
 {
-       return nfcsim_tx(nfc_dev, target, skb, cb, cb_context);
+       debugfs_remove_recursive(nfcsim_debugfs_root);
 }
 
-static int nfcsim_tm_send(struct nfc_dev *nfc_dev, struct sk_buff *skb)
+static void nfcsim_debugfs_init_dev(struct nfcsim *dev)
 {
-       return nfcsim_tx(nfc_dev, NULL, skb, NULL, NULL);
-}
-
-static struct nfc_ops nfcsim_nfc_ops = {
-       .dev_up = nfcsim_dev_up,
-       .dev_down = nfcsim_dev_down,
-       .dep_link_up = nfcsim_dep_link_up,
-       .dep_link_down = nfcsim_dep_link_down,
-       .start_poll = nfcsim_start_poll,
-       .stop_poll = nfcsim_stop_poll,
-       .activate_target = nfcsim_activate_target,
-       .deactivate_target = nfcsim_deactivate_target,
-       .im_transceive = nfcsim_im_transceive,
-       .tm_send = nfcsim_tm_send,
-};
+       struct dentry *dev_dir;
+       char devname[5]; /* nfcX\0 */
+       u32 idx;
+       int n;
 
-static void nfcsim_set_polling_mode(struct nfcsim *dev)
-{
-       if (dev->polling_mode == NFCSIM_POLL_NONE) {
-               dev->curr_polling_mode = NFCSIM_POLL_NONE;
+       if (!nfcsim_debugfs_root) {
+               NFCSIM_ERR(dev, "nfcsim debugfs not initialized\n");
                return;
        }
 
-       if (dev->curr_polling_mode == NFCSIM_POLL_NONE) {
-               if (dev->polling_mode & NFCSIM_POLL_INITIATOR)
-                       dev->curr_polling_mode = NFCSIM_POLL_INITIATOR;
-               else
-                       dev->curr_polling_mode = NFCSIM_POLL_TARGET;
-
+       idx = dev->nfc_digital_dev->nfc_dev->idx;
+       n = snprintf(devname, sizeof(devname), "nfc%d", idx);
+       if (n >= sizeof(devname)) {
+               NFCSIM_ERR(dev, "Could not compute dev name for dev %d\n", idx);
                return;
        }
 
-       if (dev->polling_mode == NFCSIM_POLL_DUAL) {
-               if (dev->curr_polling_mode == NFCSIM_POLL_TARGET)
-                       dev->curr_polling_mode = NFCSIM_POLL_INITIATOR;
-               else
-                       dev->curr_polling_mode = NFCSIM_POLL_TARGET;
+       dev_dir = debugfs_create_dir(devname, nfcsim_debugfs_root);
+       if (!dev_dir) {
+               NFCSIM_ERR(dev, "Could not create debugfs entries for nfc%d\n",
+                          idx);
+               return;
        }
+
+       debugfs_create_u8("dropframe", 0664, dev_dir, &dev->dropframe);
 }
 
-static void nfcsim_wq_poll(struct work_struct *work)
+static struct nfcsim *nfcsim_device_new(struct nfcsim_link *link_in,
+                                       struct nfcsim_link *link_out)
 {
-       struct nfcsim *dev = container_of(work, struct nfcsim, poll_work.work);
-       struct nfcsim *peer = dev->peer_dev;
+       struct nfcsim *dev;
+       int rc;
 
-       /* These work items run on an ordered workqueue and are therefore
-        * serialized. So we can take both mutexes without being dead locked.
-        */
-       mutex_lock(&dev->lock);
-       mutex_lock(&peer->lock);
+       dev = kzalloc(sizeof(struct nfcsim), GFP_KERNEL);
+       if (!dev)
+               return ERR_PTR(-ENOMEM);
 
-       nfcsim_set_polling_mode(dev);
+       INIT_DELAYED_WORK(&dev->send_work, nfcsim_send_wq);
+       INIT_WORK(&dev->recv_work, nfcsim_recv_wq);
 
-       if (dev->curr_polling_mode == NFCSIM_POLL_NONE) {
-               DEV_DBG(dev, "Not polling\n");
-               goto unlock;
+       dev->nfc_digital_dev =
+                       nfc_digital_allocate_device(&nfcsim_digital_ops,
+                                                   NFC_PROTO_NFC_DEP_MASK,
+                                                   NFCSIM_CAPABILITIES,
+                                                   0, 0);
+       if (!dev->nfc_digital_dev) {
+               kfree(dev);
+               return ERR_PTR(-ENOMEM);
        }
 
-       DEV_DBG(dev, "Polling as %s",
-               dev->curr_polling_mode == NFCSIM_POLL_INITIATOR ?
-               "initiator\n" : "target\n");
+       nfc_digital_set_drvdata(dev->nfc_digital_dev, dev);
 
-       if (dev->curr_polling_mode == NFCSIM_POLL_TARGET)
-               goto sched_work;
+       dev->link_in = link_in;
+       dev->link_out = link_out;
 
-       if (peer->curr_polling_mode == NFCSIM_POLL_TARGET) {
-               peer->polling_mode = NFCSIM_POLL_NONE;
-               dev->polling_mode = NFCSIM_POLL_NONE;
-
-               dev->initiator = 1;
-
-               nfcsim_target_found(dev);
+       rc = nfc_digital_register_device(dev->nfc_digital_dev);
+       if (rc) {
+               pr_err("Could not register digital device (%d)\n", rc);
+               nfc_digital_free_device(dev->nfc_digital_dev);
+               kfree(dev);
 
-               goto unlock;
+               return ERR_PTR(rc);
        }
 
-sched_work:
-       /* This defines the delay for an initiator to check if the other device
-        * is polling in target mode.
-        * If the device starts in dual mode polling, it switches between
-        * initiator and target at every round.
-        * Because the wq is ordered and only 1 work item is executed at a time,
-        * we'll always have one device polling as initiator and the other as
-        * target at some point, even if both are started in dual mode.
-        */
-       queue_delayed_work(wq, &dev->poll_work, msecs_to_jiffies(200));
+       nfcsim_debugfs_init_dev(dev);
 
-unlock:
-       mutex_unlock(&peer->lock);
-       mutex_unlock(&dev->lock);
+       return dev;
 }
 
-static struct nfcsim *nfcsim_init_dev(void)
+static void nfcsim_device_free(struct nfcsim *dev)
 {
-       struct nfcsim *dev;
-       int rc = -ENOMEM;
+       nfc_digital_unregister_device(dev->nfc_digital_dev);
 
-       dev = kzalloc(sizeof(*dev), GFP_KERNEL);
-       if (dev == NULL)
-               return ERR_PTR(-ENOMEM);
+       dev->up = false;
 
-       mutex_init(&dev->lock);
+       nfcsim_link_shutdown(dev->link_in);
 
-       INIT_DELAYED_WORK(&dev->recv_work, nfcsim_wq_recv);
-       INIT_DELAYED_WORK(&dev->poll_work, nfcsim_wq_poll);
+       cancel_delayed_work_sync(&dev->send_work);
+       cancel_work_sync(&dev->recv_work);
 
-       dev->nfc_dev = nfc_allocate_device(&nfcsim_nfc_ops,
-                                          NFC_PROTO_NFC_DEP_MASK,
-                                          0, 0);
-       if (!dev->nfc_dev)
-               goto error;
+       nfc_digital_free_device(dev->nfc_digital_dev);
 
-       nfc_set_drvdata(dev->nfc_dev, dev);
-
-       rc = nfc_register_device(dev->nfc_dev);
-       if (rc)
-               goto free_nfc_dev;
-
-       dev->rx_delay = RX_DEFAULT_DELAY;
-       return dev;
-
-free_nfc_dev:
-       nfc_free_device(dev->nfc_dev);
-
-error:
        kfree(dev);
-
-       return ERR_PTR(rc);
 }
 
-static void nfcsim_free_device(struct nfcsim *dev)
-{
-       nfc_unregister_device(dev->nfc_dev);
-
-       nfc_free_device(dev->nfc_dev);
-
-       kfree(dev);
-}
+static struct nfcsim *dev0;
+static struct nfcsim *dev1;
 
 static int __init nfcsim_init(void)
 {
+       struct nfcsim_link *link0, *link1;
        int rc;
 
-       /* We need an ordered wq to ensure that poll_work items are executed
-        * one at a time.
-        */
-       wq = alloc_ordered_workqueue("nfcsim", 0);
-       if (!wq) {
+       link0 = nfcsim_link_new();
+       link1 = nfcsim_link_new();
+       if (!link0 || !link1) {
                rc = -ENOMEM;
-               goto exit;
+               goto exit_err;
        }
 
-       dev0 = nfcsim_init_dev();
+       nfcsim_debugfs_init();
+
+       dev0 = nfcsim_device_new(link0, link1);
        if (IS_ERR(dev0)) {
                rc = PTR_ERR(dev0);
-               goto exit;
+               goto exit_err;
        }
 
-       dev1 = nfcsim_init_dev();
+       dev1 = nfcsim_device_new(link1, link0);
        if (IS_ERR(dev1)) {
-               kfree(dev0);
+               nfcsim_device_free(dev0);
 
                rc = PTR_ERR(dev1);
-               goto exit;
+               goto exit_err;
        }
 
-       dev0->peer_dev = dev1;
-       dev1->peer_dev = dev0;
+       pr_info("nfcsim " NFCSIM_VERSION " initialized\n");
+
+       return 0;
 
-       pr_debug("NFCsim " NFCSIM_VERSION " initialized\n");
+exit_err:
+       pr_err("Failed to initialize nfcsim driver (%d)\n", rc);
 
-       rc = 0;
-exit:
-       if (rc)
-               pr_err("Failed to initialize nfcsim driver (%d)\n",
-                      rc);
+       nfcsim_link_free(link0);
+       nfcsim_link_free(link1);
 
        return rc;
 }
 
 static void __exit nfcsim_exit(void)
 {
-       nfcsim_cleanup_dev(dev0, 1);
-       nfcsim_cleanup_dev(dev1, 1);
+       struct nfcsim_link *link0, *link1;
+
+       link0 = dev0->link_in;
+       link1 = dev0->link_out;
+
+       nfcsim_device_free(dev0);
+       nfcsim_device_free(dev1);
 
-       nfcsim_free_device(dev0);
-       nfcsim_free_device(dev1);
+       nfcsim_link_free(link0);
+       nfcsim_link_free(link1);
 
-       destroy_workqueue(wq);
+       nfcsim_debugfs_remove();
 }
 
 module_init(nfcsim_init);
index f81e500..3fbd18b 100644 (file)
@@ -94,7 +94,7 @@ struct nfcwilink {
        struct nci_dev                  *ndev;
        unsigned long                   flags;
 
-       char                            st_register_cb_status;
+       int                             st_register_cb_status;
        long                            (*st_write) (struct sk_buff *);
 
        struct completion               completed;
@@ -320,7 +320,7 @@ exit:
 }
 
 /* Called by ST when registration is complete */
-static void nfcwilink_register_complete(void *priv_data, char data)
+static void nfcwilink_register_complete(void *priv_data, int data)
 {
        struct nfcwilink *drv = priv_data;
 
index 8ca0603..33ed78b 100644 (file)
@@ -464,10 +464,8 @@ static int pn533_usb_probe(struct usb_interface *interface,
                return -ENOMEM;
 
        in_buf = kzalloc(in_buf_len, GFP_KERNEL);
-       if (!in_buf) {
-               rc = -ENOMEM;
-               goto out_free_phy;
-       }
+       if (!in_buf)
+               return -ENOMEM;
 
        phy->udev = usb_get_dev(interface_to_usbdev(interface));
        phy->interface = interface;
@@ -554,8 +552,7 @@ error:
        usb_free_urb(phy->out_urb);
        usb_put_dev(phy->udev);
        kfree(in_buf);
-out_free_phy:
-       kfree(phy);
+
        return rc;
 }
 
index 87d5099..2b2330b 100644 (file)
@@ -343,7 +343,26 @@ in_protocols[][PORT100_IN_MAX_NUM_PROTOCOLS + 1] = {
        },
        [NFC_DIGITAL_FRAMING_NFCF_NFC_DEP] = {
                /* nfc_digital_framing_nfcf */
-               { PORT100_IN_PROT_END, 0 },
+               { PORT100_IN_PROT_INITIAL_GUARD_TIME,     18 },
+               { PORT100_IN_PROT_ADD_CRC,                 1 },
+               { PORT100_IN_PROT_CHECK_CRC,               1 },
+               { PORT100_IN_PROT_MULTI_CARD,              0 },
+               { PORT100_IN_PROT_ADD_PARITY,              0 },
+               { PORT100_IN_PROT_CHECK_PARITY,            0 },
+               { PORT100_IN_PROT_BITWISE_AC_RECV_MODE,    0 },
+               { PORT100_IN_PROT_VALID_BIT_NUMBER,        8 },
+               { PORT100_IN_PROT_CRYPTO1,                 0 },
+               { PORT100_IN_PROT_ADD_SOF,                 0 },
+               { PORT100_IN_PROT_CHECK_SOF,               0 },
+               { PORT100_IN_PROT_ADD_EOF,                 0 },
+               { PORT100_IN_PROT_CHECK_EOF,               0 },
+               { PORT100_IN_PROT_DEAF_TIME,               4 },
+               { PORT100_IN_PROT_CRM,                     0 },
+               { PORT100_IN_PROT_CRM_MIN_LEN,             0 },
+               { PORT100_IN_PROT_T1_TAG_FRAME,            0 },
+               { PORT100_IN_PROT_RFCA,                    0 },
+               { PORT100_IN_PROT_GUARD_TIME_AT_INITIATOR, 6 },
+               { PORT100_IN_PROT_END,                     0 },
        },
        [NFC_DIGITAL_FRAMING_NFC_DEP_ACTIVATED] = {
                { PORT100_IN_PROT_END, 0 },
@@ -437,6 +456,12 @@ struct port100 {
        struct urb *out_urb;
        struct urb *in_urb;
 
+       /* This mutex protects the out_urb and prevents submitting a new
+        * command through port100_send_frame_async() while the previous one
+        * is being canceled through port100_abort_cmd().
+        */
+       struct mutex out_urb_lock;
+
        struct work_struct cmd_complete_work;
 
        u8 cmd_type;
@@ -445,6 +470,9 @@ struct port100 {
         * for any queuing/locking mechanism at driver level.
         */
        struct port100_cmd *cmd;
+
+       bool cmd_cancel;
+       struct completion cmd_cancel_done;
 };
 
 struct port100_cmd {
@@ -699,10 +727,27 @@ static int port100_send_ack(struct port100 *dev)
 {
        int rc;
 
+       mutex_lock(&dev->out_urb_lock);
+
+       init_completion(&dev->cmd_cancel_done);
+
+       usb_kill_urb(dev->out_urb);
+
        dev->out_urb->transfer_buffer = ack_frame;
        dev->out_urb->transfer_buffer_length = sizeof(ack_frame);
        rc = usb_submit_urb(dev->out_urb, GFP_KERNEL);
 
+       /* Set the cmd_cancel flag only if the URB has been successfully
+        * submitted. It will be reset by the out URB completion callback
+        * port100_send_complete().
+        */
+       dev->cmd_cancel = !rc;
+
+       mutex_unlock(&dev->out_urb_lock);
+
+       if (!rc)
+               wait_for_completion(&dev->cmd_cancel_done);
+
        return rc;
 }
 
@@ -711,6 +756,16 @@ static int port100_send_frame_async(struct port100 *dev, struct sk_buff *out,
 {
        int rc;
 
+       mutex_lock(&dev->out_urb_lock);
+
+       /* A command cancel frame has been sent through dev->out_urb. Don't
+        * try to submit a new one.
+        */
+       if (dev->cmd_cancel) {
+               rc = -EAGAIN;
+               goto exit;
+       }
+
        dev->out_urb->transfer_buffer = out->data;
        dev->out_urb->transfer_buffer_length = out->len;
 
@@ -722,16 +777,15 @@ static int port100_send_frame_async(struct port100 *dev, struct sk_buff *out,
 
        rc = usb_submit_urb(dev->out_urb, GFP_KERNEL);
        if (rc)
-               return rc;
+               goto exit;
 
        rc = port100_submit_urb_for_ack(dev, GFP_KERNEL);
        if (rc)
-               goto error;
+               usb_unlink_urb(dev->out_urb);
 
-       return 0;
+exit:
+       mutex_unlock(&dev->out_urb_lock);
 
-error:
-       usb_unlink_urb(dev->out_urb);
        return rc;
 }
 
@@ -790,6 +844,12 @@ static int port100_send_cmd_async(struct port100 *dev, u8 cmd_code,
                        PORT100_FRAME_MAX_PAYLOAD_LEN +
                        PORT100_FRAME_TAIL_LEN;
 
+       if (dev->cmd) {
+               nfc_err(&dev->interface->dev,
+                       "A command is still in progress\n");
+               return -EBUSY;
+       }
+
        resp = alloc_skb(resp_len, GFP_KERNEL);
        if (!resp)
                return -ENOMEM;
@@ -867,6 +927,11 @@ static void port100_send_complete(struct urb *urb)
 {
        struct port100 *dev = urb->context;
 
+       if (dev->cmd_cancel) {
+               dev->cmd_cancel = false;
+               complete(&dev->cmd_cancel_done);
+       }
+
        switch (urb->status) {
        case 0:
                break; /* success */
@@ -985,6 +1050,10 @@ static int port100_switch_rf(struct nfc_digital_dev *ddev, bool on)
 
        *skb_put(skb, 1) = on ? 1 : 0;
 
+       /* Cancel the last command if the device is being switched off */
+       if (!on)
+               port100_abort_cmd(ddev);
+
        resp = port100_send_cmd_sync(dev, PORT100_CMD_SWITCH_RF, skb);
 
        if (IS_ERR(resp))
@@ -1430,6 +1499,7 @@ static int port100_probe(struct usb_interface *interface,
        if (!dev)
                return -ENOMEM;
 
+       mutex_init(&dev->out_urb_lock);
        dev->udev = usb_get_dev(interface_to_usbdev(interface));
        dev->interface = interface;
        usb_set_intfdata(interface, dev);
index 10842b7..26c9dbb 100644 (file)
@@ -1048,6 +1048,10 @@ static int trf7970a_init(struct trf7970a *trf)
        if (ret)
                goto err_out;
 
+       ret = trf7970a_write(trf, TRF7970A_NFC_TARGET_LEVEL, 0);
+       if (ret)
+               goto err_out;
+
        usleep_range(1000, 2000);
 
        trf->chip_status_ctrl &= ~TRF7970A_CHIP_STATUS_RF_ON;
index 1a51584..d5fb55c 100644 (file)
@@ -1394,19 +1394,22 @@ static int ns_cmp(void *priv, struct list_head *a, struct list_head *b)
        return nsa->ns_id - nsb->ns_id;
 }
 
-static struct nvme_ns *nvme_find_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
-       struct nvme_ns *ns;
-
-       lockdep_assert_held(&ctrl->namespaces_mutex);
+       struct nvme_ns *ns, *ret = NULL;
 
+       mutex_lock(&ctrl->namespaces_mutex);
        list_for_each_entry(ns, &ctrl->namespaces, list) {
-               if (ns->ns_id == nsid)
-                       return ns;
+               if (ns->ns_id == nsid) {
+                       kref_get(&ns->kref);
+                       ret = ns;
+                       break;
+               }
                if (ns->ns_id > nsid)
                        break;
        }
-       return NULL;
+       mutex_unlock(&ctrl->namespaces_mutex);
+       return ret;
 }
 
 static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
@@ -1415,8 +1418,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
        struct gendisk *disk;
        int node = dev_to_node(ctrl->dev);
 
-       lockdep_assert_held(&ctrl->namespaces_mutex);
-
        ns = kzalloc_node(sizeof(*ns), GFP_KERNEL, node);
        if (!ns)
                return;
@@ -1457,7 +1458,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
        if (nvme_revalidate_disk(ns->disk))
                goto out_free_disk;
 
-       list_add_tail_rcu(&ns->list, &ctrl->namespaces);
+       mutex_lock(&ctrl->namespaces_mutex);
+       list_add_tail(&ns->list, &ctrl->namespaces);
+       mutex_unlock(&ctrl->namespaces_mutex);
+
        kref_get(&ctrl->kref);
        if (ns->type == NVME_NS_LIGHTNVM)
                return;
@@ -1480,8 +1484,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 
 static void nvme_ns_remove(struct nvme_ns *ns)
 {
-       lockdep_assert_held(&ns->ctrl->namespaces_mutex);
-
        if (test_and_set_bit(NVME_NS_REMOVING, &ns->flags))
                return;
 
@@ -1494,8 +1496,11 @@ static void nvme_ns_remove(struct nvme_ns *ns)
                blk_mq_abort_requeue_list(ns->queue);
                blk_cleanup_queue(ns->queue);
        }
+
+       mutex_lock(&ns->ctrl->namespaces_mutex);
        list_del_init(&ns->list);
-       synchronize_rcu();
+       mutex_unlock(&ns->ctrl->namespaces_mutex);
+
        nvme_put_ns(ns);
 }
 
@@ -1503,10 +1508,11 @@ static void nvme_validate_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
        struct nvme_ns *ns;
 
-       ns = nvme_find_ns(ctrl, nsid);
+       ns = nvme_find_get_ns(ctrl, nsid);
        if (ns) {
                if (revalidate_disk(ns->disk))
                        nvme_ns_remove(ns);
+               nvme_put_ns(ns);
        } else
                nvme_alloc_ns(ctrl, nsid);
 }
@@ -1535,9 +1541,11 @@ static int nvme_scan_ns_list(struct nvme_ctrl *ctrl, unsigned nn)
                        nvme_validate_ns(ctrl, nsid);
 
                        while (++prev < nsid) {
-                               ns = nvme_find_ns(ctrl, prev);
-                               if (ns)
+                               ns = nvme_find_get_ns(ctrl, prev);
+                               if (ns) {
                                        nvme_ns_remove(ns);
+                                       nvme_put_ns(ns);
+                               }
                        }
                }
                nn -= j;
@@ -1552,8 +1560,6 @@ static void nvme_scan_ns_sequential(struct nvme_ctrl *ctrl, unsigned nn)
        struct nvme_ns *ns, *next;
        unsigned i;
 
-       lockdep_assert_held(&ctrl->namespaces_mutex);
-
        for (i = 1; i <= nn; i++)
                nvme_validate_ns(ctrl, i);
 
@@ -1576,7 +1582,6 @@ static void nvme_scan_work(struct work_struct *work)
        if (nvme_identify_ctrl(ctrl, &id))
                return;
 
-       mutex_lock(&ctrl->namespaces_mutex);
        nn = le32_to_cpu(id->nn);
        if (ctrl->vs >= NVME_VS(1, 1) &&
            !(ctrl->quirks & NVME_QUIRK_IDENTIFY_CNS)) {
@@ -1585,6 +1590,7 @@ static void nvme_scan_work(struct work_struct *work)
        }
        nvme_scan_ns_sequential(ctrl, nn);
  done:
+       mutex_lock(&ctrl->namespaces_mutex);
        list_sort(NULL, &ctrl->namespaces, ns_cmp);
        mutex_unlock(&ctrl->namespaces_mutex);
        kfree(id);
@@ -1604,6 +1610,11 @@ void nvme_queue_scan(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_queue_scan);
 
+/*
+ * This function iterates the namespace list unlocked to allow recovery from
+ * controller failure. It is up to the caller to ensure the namespace list is
+ * not modified by scan work while this function is executing.
+ */
 void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
 {
        struct nvme_ns *ns, *next;
@@ -1617,10 +1628,8 @@ void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
        if (ctrl->state == NVME_CTRL_DEAD)
                nvme_kill_queues(ctrl);
 
-       mutex_lock(&ctrl->namespaces_mutex);
        list_for_each_entry_safe(ns, next, &ctrl->namespaces, list)
                nvme_ns_remove(ns);
-       mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_remove_namespaces);
 
@@ -1791,11 +1800,8 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 {
        struct nvme_ns *ns;
 
-       rcu_read_lock();
-       list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
-               if (!kref_get_unless_zero(&ns->kref))
-                       continue;
-
+       mutex_lock(&ctrl->namespaces_mutex);
+       list_for_each_entry(ns, &ctrl->namespaces, list) {
                /*
                 * Revalidating a dead namespace sets capacity to 0. This will
                 * end buffered writers dirtying pages that can't be synced.
@@ -1806,10 +1812,8 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
                blk_set_queue_dying(ns->queue);
                blk_mq_abort_requeue_list(ns->queue);
                blk_mq_start_stopped_hw_queues(ns->queue, true);
-
-               nvme_put_ns(ns);
        }
-       rcu_read_unlock();
+       mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_kill_queues);
 
@@ -1817,8 +1821,8 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl)
 {
        struct nvme_ns *ns;
 
-       rcu_read_lock();
-       list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
+       mutex_lock(&ctrl->namespaces_mutex);
+       list_for_each_entry(ns, &ctrl->namespaces, list) {
                spin_lock_irq(ns->queue->queue_lock);
                queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
                spin_unlock_irq(ns->queue->queue_lock);
@@ -1826,7 +1830,7 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl)
                blk_mq_cancel_requeue_work(ns->queue);
                blk_mq_stop_hw_queues(ns->queue);
        }
-       rcu_read_unlock();
+       mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_stop_queues);
 
@@ -1834,13 +1838,13 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 {
        struct nvme_ns *ns;
 
-       rcu_read_lock();
-       list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
+       mutex_lock(&ctrl->namespaces_mutex);
+       list_for_each_entry(ns, &ctrl->namespaces, list) {
                queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, ns->queue);
                blk_mq_start_stopped_hw_queues(ns->queue, true);
                blk_mq_kick_requeue_list(ns->queue);
        }
-       rcu_read_unlock();
+       mutex_unlock(&ctrl->namespaces_mutex);
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);
 
index e2b50bc..b470f7e 100644 (file)
@@ -19,6 +19,7 @@
 #include <linux/of_gpio.h>
 #include <linux/of_irq.h>
 #include <linux/of_mdio.h>
+#include <linux/of_net.h>
 #include <linux/module.h>
 
 MODULE_AUTHOR("Grant Likely <grant.likely@secretlab.ca>");
@@ -331,6 +332,41 @@ struct phy_device *of_phy_connect(struct net_device *dev,
 }
 EXPORT_SYMBOL(of_phy_connect);
 
+/**
+ * of_phy_get_and_connect - Get phy node and connect to the phy described in
+ *     the device tree
+ * @dev: pointer to net_device claiming the phy
+ * @np: Pointer to device tree node for the net_device claiming the phy
+ * @hndlr: Link state callback for the network device
+ *
+ * If successful, returns a pointer to the phy_device with the embedded
+ * struct device refcount incremented by one, or NULL on failure. The
+ * refcount must be dropped by calling phy_disconnect() or phy_detach().
+ */
+struct phy_device *of_phy_get_and_connect(struct net_device *dev,
+                                         struct device_node *np,
+                                         void (*hndlr)(struct net_device *))
+{
+       phy_interface_t iface;
+       struct device_node *phy_np;
+       struct phy_device *phy;
+
+       iface = of_get_phy_mode(np);
+       if (iface < 0)
+               return NULL;
+
+       phy_np = of_parse_phandle(np, "phy-handle", 0);
+       if (!phy_np)
+               return NULL;
+
+       phy = of_phy_connect(dev, phy_np, hndlr, 0, iface);
+
+       of_node_put(phy_np);
+
+       return phy;
+}
+EXPORT_SYMBOL(of_phy_get_and_connect);
+
 /**
  * of_phy_attach - Attach to a PHY without starting the state machine
  * @dev: pointer to net_device claiming the phy
index 38a8bbe..83797d8 100644 (file)
@@ -195,7 +195,7 @@ static void parport_detach(struct parport *port)
        struct pps_client_pp *device;
 
        /* FIXME: oooh, this is ugly! */
-       if (strcmp(pardev->name, KBUILD_MODNAME))
+       if (!pardev || strcmp(pardev->name, KBUILD_MODNAME))
                /* not our port */
                return;
 
index 526bf23..6c7fe47 100644 (file)
@@ -152,7 +152,6 @@ static const struct regulator_ops rpm_smps_ldo_ops_fixed = {
        .enable = rpm_reg_enable,
        .disable = rpm_reg_disable,
        .is_enabled = rpm_reg_is_enabled,
-       .list_voltage = regulator_list_voltage_linear_range,
 
        .get_voltage = rpm_reg_get_voltage,
        .set_voltage = rpm_reg_set_voltage,
index d6a691e..d6803a9 100644 (file)
@@ -10093,6 +10093,7 @@ static int ipr_probe_ioa(struct pci_dev *pdev,
                ioa_cfg->intr_flag = IPR_USE_MSI;
        else {
                ioa_cfg->intr_flag = IPR_USE_LSI;
+               ioa_cfg->clear_isr = 1;
                ioa_cfg->nvectors = 1;
                dev_info(&pdev->dev, "Cannot enable MSI.\n");
        }
index 5649c20..a92a62d 100644 (file)
@@ -2548,7 +2548,7 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha,
        if (!vha->flags.online)
                return;
 
-       if (rsp->msix->cpuid != smp_processor_id()) {
+       if (rsp->msix && rsp->msix->cpuid != smp_processor_id()) {
                /* if kernel does not notify qla of IRQ's CPU change,
                 * then set it here.
                 */
index ff41c31..eaccd65 100644 (file)
@@ -429,7 +429,7 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
         * here, and we don't know what device it is
         * trying to work with, leave it as-is.
         */
-       vmax = 8;       /* max length of vendor */
+       vmax = sizeof(devinfo->vendor);
        vskip = vendor;
        while (vmax > 0 && *vskip == ' ') {
                vmax--;
@@ -439,7 +439,7 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
        while (vmax > 0 && vskip[vmax - 1] == ' ')
                --vmax;
 
-       mmax = 16;      /* max length of model */
+       mmax = sizeof(devinfo->model);
        mskip = model;
        while (mmax > 0 && *mskip == ' ') {
                mmax--;
@@ -455,10 +455,12 @@ static struct scsi_dev_info_list *scsi_dev_info_list_find(const char *vendor,
                         * Behave like the older version of get_device_flags.
                         */
                        if (memcmp(devinfo->vendor, vskip, vmax) ||
-                                       devinfo->vendor[vmax])
+                                       (vmax < sizeof(devinfo->vendor) &&
+                                               devinfo->vendor[vmax]))
                                continue;
                        if (memcmp(devinfo->model, mskip, mmax) ||
-                                       devinfo->model[mmax])
+                                       (mmax < sizeof(devinfo->model) &&
+                                               devinfo->model[mmax]))
                                continue;
                        return devinfo;
                } else {
index f973bfc..1e93a37 100644 (file)
@@ -366,34 +366,22 @@ static void to_utf8(struct vc_data *vc, uint c)
 
 static void do_compute_shiftstate(void)
 {
-       unsigned int i, j, k, sym, val;
+       unsigned int k, sym, val;
 
        shift_state = 0;
        memset(shift_down, 0, sizeof(shift_down));
 
-       for (i = 0; i < ARRAY_SIZE(key_down); i++) {
-
-               if (!key_down[i])
+       for_each_set_bit(k, key_down, min(NR_KEYS, KEY_CNT)) {
+               sym = U(key_maps[0][k]);
+               if (KTYP(sym) != KT_SHIFT && KTYP(sym) != KT_SLOCK)
                        continue;
 
-               k = i * BITS_PER_LONG;
-
-               for (j = 0; j < BITS_PER_LONG; j++, k++) {
-
-                       if (!test_bit(k, key_down))
-                               continue;
+               val = KVAL(sym);
+               if (val == KVAL(K_CAPSSHIFT))
+                       val = KVAL(K_SHIFT);
 
-                       sym = U(key_maps[0][k]);
-                       if (KTYP(sym) != KT_SHIFT && KTYP(sym) != KT_SLOCK)
-                               continue;
-
-                       val = KVAL(sym);
-                       if (val == KVAL(K_CAPSSHIFT))
-                               val = KVAL(K_SHIFT);
-
-                       shift_down[val]++;
-                       shift_state |= (1 << val);
-               }
+               shift_down[val]++;
+               shift_state |= BIT(val);
        }
 }
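The keyboard.c hunk collapses a hand-rolled word-by-word bitmap scan into a single `for_each_set_bit()` loop. As a rough userspace sketch of that iterator pattern (the helper names and the O(n) `find_next_bit` here are illustrative, not the kernel's optimized implementations):

```c
#include <assert.h>
#include <limits.h>

/* Bits per unsigned long, as in the kernel's BITS_PER_LONG. */
#define BPL (sizeof(unsigned long) * CHAR_BIT)

/* Naive find-next-bit: first set bit at index >= start, or size. */
static unsigned int find_next_bit_ul(const unsigned long *map,
                                     unsigned int size, unsigned int start)
{
    for (unsigned int k = start; k < size; k++)
        if (map[k / BPL] & (1UL << (k % BPL)))
            return k;
    return size;
}

/* Visit every set bit in ascending order, like for_each_set_bit(). */
#define for_each_set_bit_ul(bit, map, size)                          \
    for ((bit) = find_next_bit_ul((map), (size), 0); (bit) < (size); \
         (bit) = find_next_bit_ul((map), (size), (bit) + 1))
```

The rewritten `do_compute_shiftstate()` body then becomes a flat loop over the set bits of `key_down`, with the skip-empty-word and `test_bit` bookkeeping folded into the iterator.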
 
index 076970a..4ce10bc 100644 (file)
@@ -423,36 +423,7 @@ upload:
 
        return 0;
 }
-static int __init check_prereq(void)
-{
-       struct cpuinfo_x86 *c = &cpu_data(0);
-
-       if (!xen_initial_domain())
-               return -ENODEV;
-
-       if (!acpi_gbl_FADT.smi_command)
-               return -ENODEV;
-
-       if (c->x86_vendor == X86_VENDOR_INTEL) {
-               if (!cpu_has(c, X86_FEATURE_EST))
-                       return -ENODEV;
 
-               return 0;
-       }
-       if (c->x86_vendor == X86_VENDOR_AMD) {
-               /* Copied from powernow-k8.h, can't include ../cpufreq/powernow
-                * as we get compile warnings for the static functions.
-                */
-#define CPUID_FREQ_VOLT_CAPABILITIES    0x80000007
-#define USE_HW_PSTATE                   0x00000080
-               u32 eax, ebx, ecx, edx;
-               cpuid(CPUID_FREQ_VOLT_CAPABILITIES, &eax, &ebx, &ecx, &edx);
-               if ((edx & USE_HW_PSTATE) != USE_HW_PSTATE)
-                       return -ENODEV;
-               return 0;
-       }
-       return -ENODEV;
-}
 /* acpi_perf_data is a pointer to percpu data. */
 static struct acpi_processor_performance __percpu *acpi_perf_data;
 
@@ -509,10 +480,10 @@ struct notifier_block xen_acpi_processor_resume_nb = {
 static int __init xen_acpi_processor_init(void)
 {
        unsigned int i;
-       int rc = check_prereq();
+       int rc;
 
-       if (rc)
-               return rc;
+       if (!xen_initial_domain())
+               return -ENODEV;
 
        nr_acpi_bits = get_max_acpi_id() + 1;
        acpi_ids_done = kcalloc(BITS_TO_LONGS(nr_acpi_bits), sizeof(unsigned long), GFP_KERNEL);
index cacf30d..7487971 100644 (file)
@@ -316,11 +316,18 @@ static int xenbus_write_transaction(unsigned msg_type,
                        rc = -ENOMEM;
                        goto out;
                }
+       } else {
+               list_for_each_entry(trans, &u->transactions, list)
+                       if (trans->handle.id == u->u.msg.tx_id)
+                               break;
+               if (&trans->list == &u->transactions)
+                       return -ESRCH;
        }
 
        reply = xenbus_dev_request_and_reply(&u->u.msg);
        if (IS_ERR(reply)) {
-               kfree(trans);
+               if (msg_type == XS_TRANSACTION_START)
+                       kfree(trans);
                rc = PTR_ERR(reply);
                goto out;
        }
@@ -333,12 +340,7 @@ static int xenbus_write_transaction(unsigned msg_type,
                        list_add(&trans->list, &u->transactions);
                }
        } else if (u->u.msg.type == XS_TRANSACTION_END) {
-               list_for_each_entry(trans, &u->transactions, list)
-                       if (trans->handle.id == u->u.msg.tx_id)
-                               break;
-               BUG_ON(&trans->list == &u->transactions);
                list_del(&trans->list);
-
                kfree(trans);
        }
 
index 374b12a..22f7cd7 100644 (file)
@@ -232,10 +232,10 @@ static void transaction_resume(void)
 void *xenbus_dev_request_and_reply(struct xsd_sockmsg *msg)
 {
        void *ret;
-       struct xsd_sockmsg req_msg = *msg;
+       enum xsd_sockmsg_type type = msg->type;
        int err;
 
-       if (req_msg.type == XS_TRANSACTION_START)
+       if (type == XS_TRANSACTION_START)
                transaction_start();
 
        mutex_lock(&xs_state.request_mutex);
@@ -249,12 +249,8 @@ void *xenbus_dev_request_and_reply(struct xsd_sockmsg *msg)
 
        mutex_unlock(&xs_state.request_mutex);
 
-       if (IS_ERR(ret))
-               return ret;
-
        if ((msg->type == XS_TRANSACTION_END) ||
-           ((req_msg.type == XS_TRANSACTION_START) &&
-            (msg->type == XS_ERROR)))
+           ((type == XS_TRANSACTION_START) && (msg->type == XS_ERROR)))
                transaction_end();
 
        return ret;
index f4645c5..e2e7c74 100644 (file)
@@ -853,7 +853,7 @@ v9fs_vfs_atomic_open(struct inode *dir, struct dentry *dentry,
        struct p9_fid *fid, *inode_fid;
        struct dentry *res = NULL;
 
-       if (d_unhashed(dentry)) {
+       if (d_in_lookup(dentry)) {
                res = v9fs_vfs_lookup(dir, dentry, 0);
                if (IS_ERR(res))
                        return PTR_ERR(res);
index a34702c..1b51eaa 100644 (file)
@@ -254,7 +254,7 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
        struct posix_acl *pacl = NULL, *dacl = NULL;
        struct dentry *res = NULL;
 
-       if (d_unhashed(dentry)) {
+       if (d_in_lookup(dentry)) {
                res = v9fs_vfs_lookup(dir, dentry, 0);
                if (IS_ERR(res))
                        return PTR_ERR(res);
index ce2f579..0daaf7c 100644 (file)
@@ -394,7 +394,7 @@ int ceph_atomic_open(struct inode *dir, struct dentry *dentry,
        if ((flags & O_CREAT) && !req->r_reply_info.head->is_dentry)
                err = ceph_handle_notrace_create(dir, dentry);
 
-       if (d_unhashed(dentry)) {
+       if (d_in_lookup(dentry)) {
                dn = ceph_finish_lookup(req, dentry, err);
                if (IS_ERR(dn))
                        err = PTR_ERR(dn);
index c3eb998..fb0903f 100644 (file)
@@ -445,7 +445,7 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
                 * Check for hashed negative dentry. We have already revalidated
                 * the dentry and it is fine. No need to perform another lookup.
                 */
-               if (!d_unhashed(direntry))
+               if (!d_in_lookup(direntry))
                        return -ENOENT;
 
                res = cifs_lookup(inode, direntry, 0);
index 33b7ee3..bbc1252 100644 (file)
@@ -357,8 +357,6 @@ configfs_write_bin_file(struct file *file, const char __user *buf,
 
        len = simple_write_to_buffer(buffer->bin_buffer,
                        buffer->bin_buffer_size, ppos, buf, count);
-       if (len > 0)
-               *ppos += len;
 out:
        mutex_unlock(&buffer->mutex);
        return len;
index 0d8eb34..e5e29f8 100644 (file)
@@ -45,7 +45,7 @@
  * ecryptfs_to_hex
  * @dst: Buffer to take hex character representation of contents of
  *       src; must be at least of size (src_size * 2)
- * @src: Buffer to be converted to a hex string respresentation
+ * @src: Buffer to be converted to a hex string representation
  * @src_size: number of bytes to convert
  */
 void ecryptfs_to_hex(char *dst, char *src, size_t src_size)
@@ -60,7 +60,7 @@ void ecryptfs_to_hex(char *dst, char *src, size_t src_size)
  * ecryptfs_from_hex
  * @dst: Buffer to take the bytes from src hex; must be at least of
  *       size (src_size / 2)
- * @src: Buffer to be converted from a hex string respresentation to raw value
+ * @src: Buffer to be converted from a hex string representation to raw value
  * @dst_size: size of dst buffer, or number of hex characters pairs to convert
  */
 void ecryptfs_from_hex(char *dst, char *src, int dst_size)
@@ -953,7 +953,7 @@ struct ecryptfs_cipher_code_str_map_elem {
 };
 
 /* Add support for additional ciphers by adding elements here. The
- * cipher_code is whatever OpenPGP applicatoins use to identify the
+ * cipher_code is whatever OpenPGP applications use to identify the
  * ciphers. List in order of probability. */
 static struct ecryptfs_cipher_code_str_map_elem
 ecryptfs_cipher_code_str_map[] = {
@@ -1410,7 +1410,7 @@ int ecryptfs_read_and_validate_xattr_region(struct dentry *dentry,
  *
  * Common entry point for reading file metadata. From here, we could
  * retrieve the header information from the header region of the file,
- * the xattr region of the file, or some other repostory that is
+ * the xattr region of the file, or some other repository that is
  * stored separately from the file itself. The current implementation
  * supports retrieving the metadata information from the file contents
  * and from the xattr region.
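The corrected kernel-doc above spells out the `ecryptfs_to_hex`/`ecryptfs_from_hex` contract: `dst` must hold `src_size * 2` characters for encoding, and `dst_size` raw bytes come from `dst_size * 2` hex characters when decoding. A plain userspace rendering of that contract (this is a sketch of the documented behavior, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Encode src_size raw bytes as 2*src_size lowercase hex characters. */
static void to_hex(char *dst, const char *src, size_t src_size)
{
    static const char hex[] = "0123456789abcdef";

    for (size_t i = 0; i < src_size; i++) {
        dst[i * 2]     = hex[((unsigned char)src[i]) >> 4];
        dst[i * 2 + 1] = hex[src[i] & 0x0f];
    }
}

/* Value of one hex digit; accepts upper and lower case. */
static int unhex(char c)
{
    return (c >= '0' && c <= '9') ? c - '0' : (c | 0x20) - 'a' + 10;
}

/* Decode dst_size bytes from 2*dst_size hex characters in src. */
static void from_hex(char *dst, const char *src, size_t dst_size)
{
    for (size_t i = 0; i < dst_size; i++)
        dst[i] = (char)((unhex(src[i * 2]) << 4) | unhex(src[i * 2 + 1]));
}
```

Note the `(unsigned char)` cast before the right shift: shifting a negative `char` (e.g. a 0xff byte) would otherwise be implementation-defined.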
index 7000b96..ca4e837 100644 (file)
@@ -169,9 +169,22 @@ out:
        return rc;
 }
 
+static int ecryptfs_mmap(struct file *file, struct vm_area_struct *vma)
+{
+       struct file *lower_file = ecryptfs_file_to_lower(file);
+       /*
+        * Don't allow mmap on top of file systems that don't support it
+        * natively.  If FILESYSTEM_MAX_STACK_DEPTH > 2 or ecryptfs
+        * allows recursive mounting, this will need to be extended.
+        */
+       if (!lower_file->f_op->mmap)
+               return -ENODEV;
+       return generic_file_mmap(file, vma);
+}
+
 /**
  * ecryptfs_open
- * @inode: inode speciying file to open
+ * @inode: inode specifying file to open
  * @file: Structure to return filled in
  *
  * Opens the file specified by inode.
@@ -240,7 +253,7 @@ out:
 
 /**
  * ecryptfs_dir_open
- * @inode: inode speciying file to open
+ * @inode: inode specifying file to open
  * @file: Structure to return filled in
  *
  * Opens the file specified by inode.
@@ -403,7 +416,7 @@ const struct file_operations ecryptfs_main_fops = {
 #ifdef CONFIG_COMPAT
        .compat_ioctl = ecryptfs_compat_ioctl,
 #endif
-       .mmap = generic_file_mmap,
+       .mmap = ecryptfs_mmap,
        .open = ecryptfs_open,
        .flush = ecryptfs_flush,
        .release = ecryptfs_release,
index e818f5a..866bb18 100644 (file)
@@ -25,7 +25,6 @@
 #include <linux/slab.h>
 #include <linux/wait.h>
 #include <linux/mount.h>
-#include <linux/file.h>
 #include "ecryptfs_kernel.h"
 
 struct ecryptfs_open_req {
@@ -148,7 +147,7 @@ int ecryptfs_privileged_open(struct file **lower_file,
        flags |= IS_RDONLY(d_inode(lower_dentry)) ? O_RDONLY : O_RDWR;
        (*lower_file) = dentry_open(&req.path, flags, cred);
        if (!IS_ERR(*lower_file))
-               goto have_file;
+               goto out;
        if ((flags & O_ACCMODE) == O_RDONLY) {
                rc = PTR_ERR((*lower_file));
                goto out;
@@ -166,16 +165,8 @@ int ecryptfs_privileged_open(struct file **lower_file,
        mutex_unlock(&ecryptfs_kthread_ctl.mux);
        wake_up(&ecryptfs_kthread_ctl.wait);
        wait_for_completion(&req.done);
-       if (IS_ERR(*lower_file)) {
+       if (IS_ERR(*lower_file))
                rc = PTR_ERR(*lower_file);
-               goto out;
-       }
-have_file:
-       if ((*lower_file)->f_op->mmap == NULL) {
-               fput(*lower_file);
-               *lower_file = NULL;
-               rc = -EMEDIUMTYPE;
-       }
 out:
        return rc;
 }
index 1698132..6120044 100644 (file)
@@ -738,8 +738,7 @@ static void ecryptfs_free_kmem_caches(void)
                struct ecryptfs_cache_info *info;
 
                info = &ecryptfs_cache_infos[i];
-               if (*(info->cache))
-                       kmem_cache_destroy(*(info->cache));
+               kmem_cache_destroy(*(info->cache));
        }
 }
 
index 989a2ce..fe7e83a 100644 (file)
@@ -483,9 +483,9 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
                goto out_free;
        }
        inode->i_state |= I_WB_SWITCH;
+       __iget(inode);
        spin_unlock(&inode->i_lock);
 
-       ihold(inode);
        isw->inode = inode;
 
        atomic_inc(&isw_nr_in_flight);
index 264f07c..cca7b04 100644 (file)
@@ -480,7 +480,7 @@ static int fuse_atomic_open(struct inode *dir, struct dentry *entry,
        struct fuse_conn *fc = get_fuse_conn(dir);
        struct dentry *res = NULL;
 
-       if (d_unhashed(entry)) {
+       if (d_in_lookup(entry)) {
                res = fuse_lookup(dir, entry, 0);
                if (IS_ERR(res))
                        return PTR_ERR(res);
index 21dc784..9bad79f 100644 (file)
@@ -1189,7 +1189,7 @@ static int gfs2_atomic_open(struct inode *dir, struct dentry *dentry,
        struct dentry *d;
        bool excl = !!(flags & O_EXCL);
 
-       if (!d_unhashed(dentry))
+       if (!d_in_lookup(dentry))
                goto skip_lookup;
 
        d = __gfs2_lookup(dir, dentry, file, opened);
index d8015a0..19d93d0 100644 (file)
@@ -1485,11 +1485,13 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
                    struct file *file, unsigned open_flags,
                    umode_t mode, int *opened)
 {
+       DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
        struct nfs_open_context *ctx;
        struct dentry *res;
        struct iattr attr = { .ia_valid = ATTR_OPEN };
        struct inode *inode;
        unsigned int lookup_flags = 0;
+       bool switched = false;
        int err;
 
        /* Expect a negative dentry */
@@ -1504,7 +1506,7 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
 
        /* NFS only supports OPEN on regular files */
        if ((open_flags & O_DIRECTORY)) {
-               if (!d_unhashed(dentry)) {
+               if (!d_in_lookup(dentry)) {
                        /*
                         * Hashed negative dentry with O_DIRECTORY: dentry was
                         * revalidated and is fine, no need to perform lookup
@@ -1528,6 +1530,17 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
                attr.ia_size = 0;
        }
 
+       if (!(open_flags & O_CREAT) && !d_in_lookup(dentry)) {
+               d_drop(dentry);
+               switched = true;
+               dentry = d_alloc_parallel(dentry->d_parent,
+                                         &dentry->d_name, &wq);
+               if (IS_ERR(dentry))
+                       return PTR_ERR(dentry);
+               if (unlikely(!d_in_lookup(dentry)))
+                       return finish_no_open(file, dentry);
+       }
+
        ctx = create_nfs_open_context(dentry, open_flags);
        err = PTR_ERR(ctx);
        if (IS_ERR(ctx))
@@ -1563,14 +1576,23 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
        trace_nfs_atomic_open_exit(dir, ctx, open_flags, err);
        put_nfs_open_context(ctx);
 out:
+       if (unlikely(switched)) {
+               d_lookup_done(dentry);
+               dput(dentry);
+       }
        return err;
 
 no_open:
        res = nfs_lookup(dir, dentry, lookup_flags);
-       err = PTR_ERR(res);
+       if (switched) {
+               d_lookup_done(dentry);
+               if (!res)
+                       res = dentry;
+               else
+                       dput(dentry);
+       }
        if (IS_ERR(res))
-               goto out;
-
+               return PTR_ERR(res);
        return finish_no_open(file, res);
 }
 EXPORT_SYMBOL_GPL(nfs_atomic_open);
index c2a6b08..5c9d2d8 100644 (file)
@@ -505,6 +505,7 @@ static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir)
        struct dentry *upper;
        struct dentry *opaquedir = NULL;
        int err;
+       int flags = 0;
 
        if (WARN_ON(!workdir))
                return -EROFS;
@@ -534,46 +535,39 @@ static int ovl_remove_and_whiteout(struct dentry *dentry, bool is_dir)
        if (err)
                goto out_dput;
 
-       whiteout = ovl_whiteout(workdir, dentry);
-       err = PTR_ERR(whiteout);
-       if (IS_ERR(whiteout))
+       upper = lookup_one_len(dentry->d_name.name, upperdir,
+                              dentry->d_name.len);
+       err = PTR_ERR(upper);
+       if (IS_ERR(upper))
                goto out_unlock;
 
-       upper = ovl_dentry_upper(dentry);
-       if (!upper) {
-               upper = lookup_one_len(dentry->d_name.name, upperdir,
-                                      dentry->d_name.len);
-               err = PTR_ERR(upper);
-               if (IS_ERR(upper))
-                       goto kill_whiteout;
-
-               err = ovl_do_rename(wdir, whiteout, udir, upper, 0);
-               dput(upper);
-               if (err)
-                       goto kill_whiteout;
-       } else {
-               int flags = 0;
+       err = -ESTALE;
+       if ((opaquedir && upper != opaquedir) ||
+           (!opaquedir && ovl_dentry_upper(dentry) &&
+            upper != ovl_dentry_upper(dentry))) {
+               goto out_dput_upper;
+       }
 
-               if (opaquedir)
-                       upper = opaquedir;
-               err = -ESTALE;
-               if (upper->d_parent != upperdir)
-                       goto kill_whiteout;
+       whiteout = ovl_whiteout(workdir, dentry);
+       err = PTR_ERR(whiteout);
+       if (IS_ERR(whiteout))
+               goto out_dput_upper;
 
-               if (is_dir)
-                       flags |= RENAME_EXCHANGE;
+       if (d_is_dir(upper))
+               flags = RENAME_EXCHANGE;
 
-               err = ovl_do_rename(wdir, whiteout, udir, upper, flags);
-               if (err)
-                       goto kill_whiteout;
+       err = ovl_do_rename(wdir, whiteout, udir, upper, flags);
+       if (err)
+               goto kill_whiteout;
+       if (flags)
+               ovl_cleanup(wdir, upper);
 
-               if (is_dir)
-                       ovl_cleanup(wdir, upper);
-       }
        ovl_dentry_version_inc(dentry->d_parent);
 out_d_drop:
        d_drop(dentry);
        dput(whiteout);
+out_dput_upper:
+       dput(upper);
 out_unlock:
        unlock_rename(workdir, upperdir);
 out_dput:
index c831c2e..d1cdc60 100644 (file)
@@ -80,6 +80,9 @@ int ovl_setattr(struct dentry *dentry, struct iattr *attr)
                                goto out_drop_write;
                }
 
+               if (attr->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))
+                       attr->ia_valid &= ~ATTR_MODE;
+
                inode_lock(upperdentry->d_inode);
                err = notify_change(upperdentry, attr, NULL);
                if (!err)
@@ -410,12 +413,11 @@ struct inode *ovl_new_inode(struct super_block *sb, umode_t mode,
        if (!inode)
                return NULL;
 
-       mode &= S_IFMT;
-
        inode->i_ino = get_next_ino();
        inode->i_mode = mode;
        inode->i_flags |= S_NOATIME | S_NOCMTIME;
 
+       mode &= S_IFMT;
        switch (mode) {
        case S_IFDIR:
                inode->i_private = oe;
index 4bd9b5b..cfbca53 100644 (file)
@@ -187,6 +187,7 @@ static inline void ovl_copyattr(struct inode *from, struct inode *to)
 {
        to->i_uid = from->i_uid;
        to->i_gid = from->i_gid;
+       to->i_mode = from->i_mode;
 }
 
 /* dir.c */
index dbca737..63a6ff2 100644 (file)
@@ -1575,6 +1575,12 @@ xfs_ioc_swapext(
                goto out_put_tmp_file;
        }
 
+       if (f.file->f_op != &xfs_file_operations ||
+           tmp.file->f_op != &xfs_file_operations) {
+               error = -EINVAL;
+               goto out_put_tmp_file;
+       }
+
        ip = XFS_I(file_inode(f.file));
        tip = XFS_I(file_inode(tmp.file));
 
index 797ae2e..29c6912 100644 (file)
@@ -78,6 +78,7 @@
 
 /* ACPI PCI Interrupt Link (pci_link.c) */
 
+int acpi_irq_penalty_init(void);
 int acpi_pci_link_allocate_irq(acpi_handle handle, int index, int *triggering,
                               int *polarity, char **name);
 int acpi_pci_link_free_irq(acpi_handle handle);
index 4e4c214..1ff3a76 100644 (file)
@@ -192,7 +192,7 @@ ACPI_INIT_GLOBAL(u8, acpi_gbl_do_not_use_xsdt, FALSE);
 /*
  * Optionally support group module level code.
  */
-ACPI_INIT_GLOBAL(u8, acpi_gbl_group_module_level_code, FALSE);
+ACPI_INIT_GLOBAL(u8, acpi_gbl_group_module_level_code, TRUE);
 
 /*
  * Optionally use 32-bit FADT addresses if and when there is a conflict
index 6a67ab9..081d0f2 100644 (file)
 
 #define INIT_TEXT                                                      \
        *(.init.text)                                                   \
+       *(.text.startup)                                                \
        MEM_DISCARD(init.text)
 
 #define EXIT_DATA                                                      \
        *(.exit.data)                                                   \
+       *(.fini_array)                                                  \
+       *(.dtors)                                                       \
        MEM_DISCARD(exit.data)                                          \
        MEM_DISCARD(exit.rodata)
 
 #define EXIT_TEXT                                                      \
        *(.exit.text)                                                   \
+       *(.text.exit)                                                   \
        MEM_DISCARD(exit.text)
 
 #define EXIT_CALL                                                      \
index c801d90..4cecb0b 100644 (file)
@@ -316,6 +316,20 @@ ttm_bo_reference(struct ttm_buffer_object *bo)
  */
 extern int ttm_bo_wait(struct ttm_buffer_object *bo,
                       bool interruptible, bool no_wait);
+
+/**
+ * ttm_bo_mem_compat - Check if proposed placement is compatible with a bo
+ *
+ * @placement:  Return immediately if buffer is busy.
+ * @mem:  The struct ttm_mem_reg indicating the region where the bo resides
+ * @new_flags: Describes compatible placement found
+ *
+ * Returns true if the placement is compatible
+ */
+extern bool ttm_bo_mem_compat(struct ttm_placement *placement,
+                             struct ttm_mem_reg *mem,
+                             uint32_t *new_flags);
+
 /**
  * ttm_bo_validate
  *
index e6b41f4..3db25df 100644 (file)
@@ -159,6 +159,7 @@ struct bcma_host_ops {
 #define BCMA_CORE_DEFAULT              0xFFF
 
 #define BCMA_MAX_NR_CORES              16
+#define BCMA_CORE_SIZE                 0x1000
 
 /* Chip IDs of PCIe devices */
 #define BCMA_CHIP_ID_BCM4313   0x4313
index b3336b4..36da074 100644 (file)
@@ -209,7 +209,12 @@ u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 bool bpf_prog_array_compatible(struct bpf_array *array, const struct bpf_prog *fp);
 
 const struct bpf_func_proto *bpf_get_trace_printk_proto(void);
-const struct bpf_func_proto *bpf_get_event_output_proto(void);
+
+typedef unsigned long (*bpf_ctx_copy_t)(void *dst, const void *src,
+                                       unsigned long len);
+
+u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+                    void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy);
 
 #ifdef CONFIG_BPF_SYSCALL
 DECLARE_PER_CPU(int, bpf_prog_active);
@@ -219,6 +224,7 @@ void bpf_register_map_type(struct bpf_map_type_list *tl);
 
 struct bpf_prog *bpf_prog_get(u32 ufd);
 struct bpf_prog *bpf_prog_get_type(u32 ufd, enum bpf_prog_type type);
+struct bpf_prog *bpf_prog_add(struct bpf_prog *prog, int i);
 struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog);
 void bpf_prog_put(struct bpf_prog *prog);
 
@@ -283,6 +289,10 @@ static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,
 {
        return ERR_PTR(-EOPNOTSUPP);
 }
+static inline struct bpf_prog *bpf_prog_add(struct bpf_prog *prog, int i)
+{
+       return ERR_PTR(-EOPNOTSUPP);
+}
 
 static inline void bpf_prog_put(struct bpf_prog *prog)
 {
index 6fc31ef..a16439b 100644 (file)
@@ -368,6 +368,11 @@ struct bpf_skb_data_end {
        void *data_end;
 };
 
+struct xdp_buff {
+       void *data;
+       void *data_end;
+};
+
 /* compute the linear packet data range [data, data_end) which
  * will be accessed by cls_bpf and act_bpf programs
  */
@@ -429,6 +434,18 @@ static inline u32 bpf_prog_run_clear_cb(const struct bpf_prog *prog,
        return BPF_PROG_RUN(prog, skb);
 }
 
+static inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog,
+                                  struct xdp_buff *xdp)
+{
+       u32 ret;
+
+       rcu_read_lock();
+       ret = BPF_PROG_RUN(prog, (void *)xdp);
+       rcu_read_unlock();
+
+       return ret;
+}
+
 static inline unsigned int bpf_prog_size(unsigned int proglen)
 {
        return max(sizeof(struct bpf_prog),
@@ -467,7 +484,11 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
 }
 #endif /* CONFIG_DEBUG_SET_MODULE_RONX */
 
-int sk_filter(struct sock *sk, struct sk_buff *skb);
+int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap);
+static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
+{
+       return sk_filter_trim_cap(sk, skb, 1);
+}
 
 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err);
 void bpf_prog_free(struct bpf_prog *fp);
@@ -509,6 +530,7 @@ bool bpf_helper_changes_skb_data(void *func);
 
 struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
                                       const struct bpf_insn *patch, u32 len);
+void bpf_warn_invalid_xdp_action(u32 act);
 
 #ifdef CONFIG_BPF_JIT
 extern int bpf_jit_enable;
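The new `xdp_buff` above exposes a packet to an XDP program only as a `[data, data_end)` window, so the program must bounds-check before every access. A userspace sketch of that discipline, with an ordinary C function standing in for a loaded BPF program (the action names are real XDP verdicts, but their numeric values here are illustrative assumptions):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the new header's packet window. */
struct xdp_buff { void *data; void *data_end; };

enum { XDP_DROP = 1, XDP_PASS = 2 };   /* illustrative values */

/* "Program": drop anything too short for a 14-byte Ethernet header.
 * The pointer comparison against data_end is the bounds check the
 * verifier demands of real XDP programs. */
static int xdp_min_len_prog(struct xdp_buff *xdp)
{
    uint8_t *data = xdp->data;
    uint8_t *end  = xdp->data_end;

    if (data + 14 > end)
        return XDP_DROP;
    return XDP_PASS;
}
```

`bpf_prog_run_xdp()` in the hunk wraps exactly this kind of call in `rcu_read_lock()`/`rcu_read_unlock()` so the attached program can be swapped out safely while packets are in flight.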
index 419fb9e..f0a7a03 100644 (file)
@@ -94,7 +94,7 @@ static inline int split_huge_page(struct page *page)
 void deferred_split_huge_page(struct page *page);
 
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-               unsigned long address, bool freeze);
+               unsigned long address, bool freeze, struct page *page);
 
 #define split_huge_pmd(__vma, __pmd, __address)                                \
        do {                                                            \
@@ -102,7 +102,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                if (pmd_trans_huge(*____pmd)                            \
                                        || pmd_devmap(*____pmd))        \
                        __split_huge_pmd(__vma, __pmd, __address,       \
-                                               false);                 \
+                                               false, NULL);           \
        }  while (0)
 
 
index acedbb6..ddb8901 100644 (file)
@@ -31,6 +31,8 @@
 #define IEEE802154_MIN_PSDU_LEN                9
 #define IEEE802154_FCS_LEN             2
 #define IEEE802154_MAX_AUTH_TAG_LEN    16
+#define IEEE802154_FC_LEN              2
+#define IEEE802154_SEQ_LEN             1
 
 /*  General MAC frame format:
  *  2 bytes: Frame Control
@@ -48,6 +50,7 @@
 
 #define IEEE802154_EXTENDED_ADDR_LEN   8
 #define IEEE802154_SHORT_ADDR_LEN      2
+#define IEEE802154_PAN_ID_LEN          2
 
 #define IEEE802154_LIFS_PERIOD         40
 #define IEEE802154_SIFS_PERIOD         12
@@ -221,9 +224,17 @@ enum {
 #define IEEE802154_FCTL_ACKREQ         0x0020
 #define IEEE802154_FCTL_SECEN          0x0004
 #define IEEE802154_FCTL_INTRA_PAN      0x0040
+#define IEEE802154_FCTL_DADDR          0x0c00
+#define IEEE802154_FCTL_SADDR          0xc000
 
 #define IEEE802154_FTYPE_DATA          0x0001
 
+#define IEEE802154_FCTL_ADDR_NONE      0x0000
+#define IEEE802154_FCTL_DADDR_SHORT    0x0800
+#define IEEE802154_FCTL_DADDR_EXTENDED 0x0c00
+#define IEEE802154_FCTL_SADDR_SHORT    0x8000
+#define IEEE802154_FCTL_SADDR_EXTENDED 0xc000
+
 /*
  * ieee802154_is_data - check if type is IEEE802154_FTYPE_DATA
  * @fc: frame control bytes in little-endian byteorder
@@ -261,6 +272,24 @@ static inline bool ieee802154_is_intra_pan(__le16 fc)
        return fc & cpu_to_le16(IEEE802154_FCTL_INTRA_PAN);
 }
 
+/*
+ * ieee802154_daddr_mode - get daddr mode from fc
+ * @fc: frame control bytes in little-endian byteorder
+ */
+static inline __le16 ieee802154_daddr_mode(__le16 fc)
+{
+       return fc & cpu_to_le16(IEEE802154_FCTL_DADDR);
+}
+
+/*
+ * ieee802154_saddr_mode - get saddr mode from fc
+ * @fc: frame control bytes in little-endian byteorder
+ */
+static inline __le16 ieee802154_saddr_mode(__le16 fc)
+{
+       return fc & cpu_to_le16(IEEE802154_FCTL_SADDR);
+}
+
 /**
  * ieee802154_is_valid_psdu_len - check if psdu len is valid
  * available lengths:
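The ieee802154.h hunk adds DADDR/SADDR mask constants and two helpers that extract the addressing modes from the 16-bit frame control field. A userspace sketch of those helpers; the kernel versions operate on `__le16` via `cpu_to_le16()`, while this sketch uses plain host-order values:

```c
#include <assert.h>
#include <stdint.h>

/* Frame-control masks and modes, as added by the patch. */
#define FCTL_DADDR          0x0c00
#define FCTL_SADDR          0xc000
#define FCTL_ADDR_NONE      0x0000
#define FCTL_DADDR_SHORT    0x0800
#define FCTL_DADDR_EXTENDED 0x0c00
#define FCTL_SADDR_SHORT    0x8000
#define FCTL_SADDR_EXTENDED 0xc000

/* Destination addressing mode from the frame control word. */
static uint16_t daddr_mode(uint16_t fc) { return fc & FCTL_DADDR; }

/* Source addressing mode from the frame control word. */
static uint16_t saddr_mode(uint16_t fc) { return fc & FCTL_SADDR; }
```

Because the modes live in disjoint bit ranges, a single mask-and-compare classifies each address field without any shifting.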
index a805474..56e6069 100644 (file)
@@ -97,6 +97,11 @@ enum mem_cgroup_events_target {
 #define MEM_CGROUP_ID_SHIFT    16
 #define MEM_CGROUP_ID_MAX      USHRT_MAX
 
+struct mem_cgroup_id {
+       int id;
+       atomic_t ref;
+};
+
 struct mem_cgroup_stat_cpu {
        long count[MEMCG_NR_STAT];
        unsigned long events[MEMCG_NR_EVENTS];
@@ -172,6 +177,9 @@ enum memcg_kmem_state {
 struct mem_cgroup {
        struct cgroup_subsys_state css;
 
+       /* Private memcg ID. Used to ID objects that outlive the cgroup */
+       struct mem_cgroup_id id;
+
        /* Accounted resources */
        struct page_counter memory;
        struct page_counter swap;
@@ -330,22 +338,9 @@ static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
        if (mem_cgroup_disabled())
                return 0;
 
-       return memcg->css.id;
-}
-
-/**
- * mem_cgroup_from_id - look up a memcg from an id
- * @id: the id to look up
- *
- * Caller must hold rcu_read_lock() and use css_tryget() as necessary.
- */
-static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
-{
-       struct cgroup_subsys_state *css;
-
-       css = css_from_id(id, &memory_cgrp_subsys);
-       return mem_cgroup_from_css(css);
+       return memcg->id.id;
 }
+struct mem_cgroup *mem_cgroup_from_id(unsigned short id);
 
 /**
  * parent_mem_cgroup - find the accounting parent of a memcg
index 2e5b194..257173e 100644 (file)
@@ -37,6 +37,7 @@
 
 /* struct phy_device dev_flags definitions */
 #define MICREL_PHY_50MHZ_CLK   0x00000001
+#define MICREL_PHY_FXEN                0x00000002
 
 #define MICREL_KSZ9021_EXTREG_CTRL     0xB
 #define MICREL_KSZ9021_EXTREG_DATA_WRITE       0xC
index 587cdf9..deaa221 100644 (file)
@@ -291,16 +291,18 @@ enum {
        MLX4_WQE_CTRL_FORCE_LOOPBACK    = 1 << 0,
 };
 
+union mlx4_wqe_qpn_vlan {
+       struct {
+               __be16  vlan_tag;
+               u8      ins_vlan;
+               u8      fence_size;
+       };
+       __be32          bf_qpn;
+};
+
 struct mlx4_wqe_ctrl_seg {
        __be32                  owner_opcode;
-       union {
-               struct {
-                       __be16                  vlan_tag;
-                       u8                      ins_vlan;
-                       u8                      fence_size;
-               };
-               __be32                  bf_qpn;
-       };
+       union mlx4_wqe_qpn_vlan qpn_vlan;
        /*
         * High 24 bits are SRC remote buffer; low 8 bits are flags:
         * [7]   SO (strong ordering)
index 81e8396..a041b99 100644 (file)
@@ -469,7 +469,7 @@ struct mlx5_irq_info {
 };
 
 struct mlx5_fc_stats {
-       struct list_head list;
+       struct rb_root counters;
        struct list_head addlist;
        /* protect addlist add/splice operations */
        spinlock_t addlist_lock;
index 152421c..d671e4e 100644 (file)
@@ -893,7 +893,10 @@ struct mlx5_ifc_cmd_hca_cap_bits {
        u8         reserved_at_330[0xb];
        u8         log_max_xrcd[0x5];
 
-       u8         reserved_at_340[0x20];
+       u8         reserved_at_340[0x8];
+       u8         log_max_flow_counter_bulk[0x8];
+       u8         max_flow_counter[0x10];
+
 
        u8         reserved_at_360[0x3];
        u8         log_max_rq[0x5];
@@ -980,7 +983,8 @@ struct mlx5_ifc_dest_format_struct_bits {
 };
 
 struct mlx5_ifc_flow_counter_list_bits {
-       u8         reserved_at_0[0x10];
+       u8         clear[0x1];
+       u8         num_of_counters[0xf];
        u8         flow_counter_id[0x10];
 
        u8         reserved_at_20[0x20];
index bf9b322..d351fd3 100644 (file)
@@ -104,6 +104,7 @@ struct mfc_cache {
                        unsigned long bytes;
                        unsigned long pkt;
                        unsigned long wrong_if;
+                       unsigned long lastuse;
                        unsigned char ttls[MAXVIFS];    /* TTL thresholds               */
                } res;
        } mfc_un;
index 66982e7..3987b64 100644 (file)
@@ -92,6 +92,7 @@ struct mfc6_cache {
                        unsigned long bytes;
                        unsigned long pkt;
                        unsigned long wrong_if;
+                       unsigned long lastuse;
                        unsigned char ttls[MAXMIFS];    /* TTL thresholds               */
                } res;
        } mfc_un;
index 49736a3..43c749b 100644 (file)
@@ -63,6 +63,7 @@ struct wpan_dev;
 struct mpls_dev;
 /* UDP Tunnel offloads */
 struct udp_tunnel_info;
+struct bpf_prog;
 
 void netdev_set_default_ethtool_ops(struct net_device *dev,
                                    const struct ethtool_ops *ops);
@@ -799,6 +800,33 @@ struct tc_to_netdev {
        };
 };
 
+/* These structures hold the attributes of xdp state that are being passed
+ * to the netdevice through the xdp op.
+ */
+enum xdp_netdev_command {
+       /* Set or clear a bpf program used in the earliest stages of packet
+        * rx. The prog will have been loaded as BPF_PROG_TYPE_XDP. The callee
+        * is responsible for calling bpf_prog_put on any old progs that are
+        * stored. In case of error, the callee need not release the new prog
+        * reference, but on success it takes ownership and must bpf_prog_put
+        * when it is no longer used.
+        */
+       XDP_SETUP_PROG,
+       /* Check if a bpf program is set on the device.  The callee should
+        * return true if a program is currently attached and running.
+        */
+       XDP_QUERY_PROG,
+};
+
+struct netdev_xdp {
+       enum xdp_netdev_command command;
+       union {
+               /* XDP_SETUP_PROG */
+               struct bpf_prog *prog;
+               /* XDP_QUERY_PROG */
+               bool prog_attached;
+       };
+};
 
 /*
  * This structure defines the management hooks for network devices.
@@ -1087,6 +1115,9 @@ struct tc_to_netdev {
  *     appropriate rx headroom value allows avoiding skb head copy on
  *     forward. Setting a negative value resets the rx headroom to the
  *     default value.
+ * int (*ndo_xdp)(struct net_device *dev, struct netdev_xdp *xdp);
+ *     This function is used to set or query state related to XDP on the
+ *     netdevice. See definition of enum xdp_netdev_command for details.
  *
  */
 struct net_device_ops {
@@ -1271,6 +1302,8 @@ struct net_device_ops {
                                                       struct sk_buff *skb);
        void                    (*ndo_set_rx_headroom)(struct net_device *dev,
                                                       int needed_headroom);
+       int                     (*ndo_xdp)(struct net_device *dev,
+                                          struct netdev_xdp *xdp);
 };
 
 /**
@@ -3257,6 +3290,7 @@ int dev_get_phys_port_id(struct net_device *dev,
 int dev_get_phys_port_name(struct net_device *dev,
                           char *name, size_t len);
 int dev_change_proto_down(struct net_device *dev, bool proto_down);
+int dev_change_xdp_fd(struct net_device *dev, int fd);
 struct sk_buff *validate_xmit_skb_list(struct sk_buff *skb, struct net_device *dev);
 struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
                                    struct netdev_queue *txq, int *ret);
@@ -4175,6 +4209,13 @@ static inline void netif_keep_dst(struct net_device *dev)
        dev->priv_flags &= ~(IFF_XMIT_DST_RELEASE | IFF_XMIT_DST_RELEASE_PERM);
 }
 
+/* return true if dev can't cope with mtu frames that need vlan tag insertion */
+static inline bool netif_reduces_vlan_mtu(struct net_device *dev)
+{
+       /* TODO: reserve and use an additional IFF bit, if we get more users */
+       return dev->priv_flags & IFF_MACSEC;
+}
+
 extern struct pernet_operations __net_initdata loopback_net_ops;
 
 /* Logging, debugging and troubleshooting/diagnostic helpers. */
index 4b04587..2ab2336 100644 (file)
@@ -19,6 +19,9 @@ extern struct phy_device *of_phy_connect(struct net_device *dev,
                                         struct device_node *phy_np,
                                         void (*hndlr)(struct net_device *),
                                         u32 flags, phy_interface_t iface);
+extern struct phy_device *
+of_phy_get_and_connect(struct net_device *dev, struct device_node *np,
+                      void (*hndlr)(struct net_device *));
 struct phy_device *of_phy_attach(struct net_device *dev,
                                 struct device_node *phy_np, u32 flags,
                                 phy_interface_t iface);
@@ -52,6 +55,13 @@ static inline struct phy_device *of_phy_connect(struct net_device *dev,
        return NULL;
 }
 
+static inline struct phy_device *
+of_phy_get_and_connect(struct net_device *dev, struct device_node *np,
+                      void (*hndlr)(struct net_device *))
+{
+       return NULL;
+}
+
 static inline struct phy_device *of_phy_attach(struct net_device *dev,
                                               struct device_node *phy_np,
                                               u32 flags, phy_interface_t iface)
index 1a827ce..e79e6c6 100644 (file)
@@ -69,9 +69,22 @@ struct perf_callchain_entry_ctx {
        bool                        contexts_maxed;
 };
 
+typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
+                                    unsigned long len);
+
+struct perf_raw_frag {
+       union {
+               struct perf_raw_frag    *next;
+               unsigned long           pad;
+       };
+       perf_copy_f                     copy;
+       void                            *data;
+       u32                             size;
+} __packed;
+
 struct perf_raw_record {
+       struct perf_raw_frag            frag;
        u32                             size;
-       void                            *data;
 };
 
 /*
@@ -1283,6 +1296,11 @@ extern void perf_restore_debug_store(void);
 static inline void perf_restore_debug_store(void)                      { }
 #endif
 
+static __always_inline bool perf_raw_frag_last(const struct perf_raw_frag *frag)
+{
+       return frag->pad < sizeof(u64);
+}
+
 #define perf_output_put(handle, x) perf_output_copy((handle), &(x), sizeof(x))
 
 /*
index 5b5a80c..c818772 100644 (file)
@@ -43,10 +43,8 @@ struct posix_acl_entry {
 };
 
 struct posix_acl {
-       union {
-               atomic_t                a_refcount;
-               struct rcu_head         a_rcu;
-       };
+       atomic_t                a_refcount;
+       struct rcu_head         a_rcu;
        unsigned int            a_count;
        struct posix_acl_entry  a_entries[0];
 };
index cb4b7e8..eca6f62 100644 (file)
@@ -407,6 +407,7 @@ static inline __must_check
 void **radix_tree_iter_retry(struct radix_tree_iter *iter)
 {
        iter->next_index = iter->index;
+       iter->tags = 0;
        return NULL;
 }
 
index b690009..e585018 100644 (file)
@@ -76,6 +76,8 @@ extern struct rb_node *rb_next_postorder(const struct rb_node *);
 /* Fast replacement of a single node without remove/rebalance/add/rebalance */
 extern void rb_replace_node(struct rb_node *victim, struct rb_node *new,
                            struct rb_root *root);
+extern void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new,
+                               struct rb_root *root);
 
 static inline void rb_link_node(struct rb_node *node, struct rb_node *parent,
                                struct rb_node **rb_link)
index 14d7b83..d076183 100644 (file)
@@ -130,6 +130,19 @@ __rb_change_child(struct rb_node *old, struct rb_node *new,
                WRITE_ONCE(root->rb_node, new);
 }
 
+static inline void
+__rb_change_child_rcu(struct rb_node *old, struct rb_node *new,
+                     struct rb_node *parent, struct rb_root *root)
+{
+       if (parent) {
+               if (parent->rb_left == old)
+                       rcu_assign_pointer(parent->rb_left, new);
+               else
+                       rcu_assign_pointer(parent->rb_right, new);
+       } else
+               rcu_assign_pointer(root->rb_node, new);
+}
+
 extern void __rb_erase_color(struct rb_node *parent, struct rb_root *root,
        void (*augment_rotate)(struct rb_node *old, struct rb_node *new));
 
index 5f1533e..85830e6 100644 (file)
@@ -611,6 +611,12 @@ static inline void rcu_preempt_sleep_check(void)
        rcu_dereference_sparse(p, space); \
        ((typeof(*p) __force __kernel *)(p)); \
 })
+#define rcu_dereference_raw(p) \
+({ \
+       /* Dependency order vs. p above. */ \
+       typeof(p) ________p1 = lockless_dereference(p); \
+       ((typeof(*p) __force __kernel *)(________p1)); \
+})
 
 /**
  * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
@@ -729,8 +735,6 @@ static inline void rcu_preempt_sleep_check(void)
        __rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
                                __rcu)
 
-#define rcu_dereference_raw(p) rcu_dereference_check(p, 1) /*@@@ needed? @@@*/
-
 /*
  * The tracing infrastructure traces RCU (we want that), but unfortunately
  * some of the RCU checks causes tracing to lock up the system.
index 49eb4f8..2b0fad8 100644 (file)
@@ -158,7 +158,7 @@ struct anon_vma *page_get_anon_vma(struct page *page);
 /*
  * rmap interfaces called when adding or removing pte of page
  */
-void page_move_anon_rmap(struct page *, struct vm_area_struct *, unsigned long);
+void page_move_anon_rmap(struct page *, struct vm_area_struct *);
 void page_add_anon_rmap(struct page *, struct vm_area_struct *,
                unsigned long, bool);
 void do_page_add_anon_rmap(struct page *, struct vm_area_struct *,
index 0a0d568..f229302 100644 (file)
@@ -71,7 +71,7 @@ struct st_proto_s {
        enum proto_type type;
        long (*recv) (void *, struct sk_buff *);
        unsigned char (*match_packet) (const unsigned char *data);
-       void (*reg_complete_cb) (void *, char data);
+       void (*reg_complete_cb) (void *, int data);
        long (*write) (struct sk_buff *skb);
        void *priv_data;
 
index eefcf3e..003b252 100644 (file)
@@ -65,7 +65,7 @@
 #define HCI_I2C                8
 
 /* HCI controller types */
-#define HCI_BREDR      0x00
+#define HCI_PRIMARY    0x00
 #define HCI_AMP                0x01
 
 /* First BR/EDR Controller shall have ID = 0 */
@@ -445,6 +445,7 @@ enum {
 /* ---- HCI Error Codes ---- */
 #define HCI_ERROR_UNKNOWN_CONN_ID      0x02
 #define HCI_ERROR_AUTH_FAILURE         0x05
+#define HCI_ERROR_PIN_OR_KEY_MISSING   0x06
 #define HCI_ERROR_MEMORY_EXCEEDED      0x07
 #define HCI_ERROR_CONNECTION_TIMEOUT   0x08
 #define HCI_ERROR_REJ_LIMITED_RESOURCES        0x0d
index dc71473..ee7fc47 100644 (file)
@@ -372,6 +372,8 @@ struct hci_dev {
 
        atomic_t                promisc;
 
+       const char              *hw_info;
+       const char              *fw_info;
        struct dentry           *debugfs;
 
        struct device           dev;
@@ -654,6 +656,7 @@ enum {
        HCI_CONN_PARAM_REMOVAL_PEND,
        HCI_CONN_NEW_LINK_KEY,
        HCI_CONN_SCANNING,
+       HCI_CONN_AUTH_FAILURE,
 };
 
 static inline bool hci_conn_ssp_enabled(struct hci_conn *conn)
@@ -1021,6 +1024,10 @@ void hci_unregister_dev(struct hci_dev *hdev);
 int hci_suspend_dev(struct hci_dev *hdev);
 int hci_resume_dev(struct hci_dev *hdev);
 int hci_reset_dev(struct hci_dev *hdev);
+int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb);
+int hci_recv_diag(struct hci_dev *hdev, struct sk_buff *skb);
+void hci_set_hw_info(struct hci_dev *hdev, const char *fmt, ...);
+void hci_set_fw_info(struct hci_dev *hdev, const char *fmt, ...);
 int hci_dev_open(__u16 dev);
 int hci_dev_close(__u16 dev);
 int hci_dev_do_close(struct hci_dev *hdev);
@@ -1097,9 +1104,6 @@ int hci_remove_adv_instance(struct hci_dev *hdev, u8 instance);
 
 void hci_event_packet(struct hci_dev *hdev, struct sk_buff *skb);
 
-int hci_recv_frame(struct hci_dev *hdev, struct sk_buff *skb);
-int hci_recv_diag(struct hci_dev *hdev, struct sk_buff *skb);
-
 void hci_init_sysfs(struct hci_dev *hdev);
 void hci_conn_init_sysfs(struct hci_conn *conn);
 void hci_conn_add_sysfs(struct hci_conn *conn);
index ea73e08..7647964 100644 (file)
@@ -645,6 +645,7 @@ struct mgmt_ev_device_connected {
 #define MGMT_DEV_DISCONN_TIMEOUT       0x01
 #define MGMT_DEV_DISCONN_LOCAL_HOST    0x02
 #define MGMT_DEV_DISCONN_REMOTE                0x03
+#define MGMT_DEV_DISCONN_AUTH_FAILURE  0x04
 
 #define MGMT_EV_DEVICE_DISCONNECTED    0x000C
 struct mgmt_ev_device_disconnected {
index 171cd76..795ca40 100644 (file)
@@ -219,9 +219,22 @@ struct wpan_phy {
 
        struct device dev;
 
+       /* the network namespace this phy lives in currently */
+       possible_net_t _net;
+
        char priv[0] __aligned(NETDEV_ALIGN);
 };
 
+static inline struct net *wpan_phy_net(struct wpan_phy *wpan_phy)
+{
+       return read_pnet(&wpan_phy->_net);
+}
+
+static inline void wpan_phy_net_set(struct wpan_phy *wpan_phy, struct net *net)
+{
+       write_pnet(&wpan_phy->_net, net);
+}
+
 struct ieee802154_addr {
        u8 mode;
        __le16 pan_id;
index 20b3087..2217a3f 100644 (file)
@@ -32,6 +32,8 @@ enum dsa_tag_protocol {
 #define DSA_MAX_SWITCHES       4
 #define DSA_MAX_PORTS          12
 
+#define DSA_RTABLE_NONE                -1
+
 struct dsa_chip_data {
        /*
         * How to access the switch configuration registers.
@@ -139,6 +141,7 @@ struct dsa_switch_tree {
 struct dsa_port {
        struct net_device       *netdev;
        struct device_node      *dn;
+       unsigned int            ageing_time;
 };
 
 struct dsa_switch {
@@ -327,6 +330,7 @@ struct dsa_switch_driver {
        /*
         * Bridge integration
         */
+       int     (*set_ageing_time)(struct dsa_switch *ds, unsigned int msecs);
        int     (*port_bridge_join)(struct dsa_switch *ds, int port,
                                    struct net_device *bridge);
        void    (*port_bridge_leave)(struct dsa_switch *ds, int port);
index cf6c745..d15214d 100644 (file)
@@ -14,27 +14,26 @@ struct gro_cells {
        struct gro_cell __percpu        *cells;
 };
 
-static inline void gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+static inline int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
 {
        struct gro_cell *cell;
        struct net_device *dev = skb->dev;
 
-       if (!gcells->cells || skb_cloned(skb) || !(dev->features & NETIF_F_GRO)) {
-               netif_rx(skb);
-               return;
-       }
+       if (!gcells->cells || skb_cloned(skb) || !(dev->features & NETIF_F_GRO))
+               return netif_rx(skb);
 
        cell = this_cpu_ptr(gcells->cells);
 
        if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
                atomic_long_inc(&dev->rx_dropped);
                kfree_skb(skb);
-               return;
+               return NET_RX_DROP;
        }
 
        __skb_queue_tail(&cell->napi_skbs, skb);
        if (skb_queue_len(&cell->napi_skbs) == 1)
                napi_schedule(&cell->napi);
+       return NET_RX_SUCCESS;
 }
 
 /* called under BH context */
index 08f36cd..9742b92 100644 (file)
@@ -47,6 +47,7 @@ struct inet_skb_parm {
 #define IPSKB_REROUTED         BIT(4)
 #define IPSKB_DOREDIRECT       BIT(5)
 #define IPSKB_FRAG_PMTU                BIT(6)
+#define IPSKB_FRAG_SEGS                BIT(7)
 
        u16                     frag_max_size;
 };
index e465c85..286824a 100644 (file)
@@ -247,14 +247,123 @@ struct ieee802154_ops {
  */
 static inline __le16 ieee802154_get_fc_from_skb(const struct sk_buff *skb)
 {
+       __le16 fc;
+
        /* check if we can fc at skb_mac_header of sk buffer */
-       if (unlikely(!skb_mac_header_was_set(skb) ||
-                    (skb_tail_pointer(skb) - skb_mac_header(skb)) < 2)) {
-               WARN_ON(1);
+       if (WARN_ON(!skb_mac_header_was_set(skb) ||
+                   (skb_tail_pointer(skb) -
+                    skb_mac_header(skb)) < IEEE802154_FC_LEN))
                return cpu_to_le16(0);
+
+       memcpy(&fc, skb_mac_header(skb), IEEE802154_FC_LEN);
+       return fc;
+}
+
+/**
+ * ieee802154_skb_dst_pan - get the pointer to destination pan field
+ * @fc: mac header frame control field
+ * @skb: skb where the destination pan pointer will be taken from
+ */
+static inline unsigned char *ieee802154_skb_dst_pan(__le16 fc,
+                                                   const struct sk_buff *skb)
+{
+       unsigned char *dst_pan;
+
+       switch (ieee802154_daddr_mode(fc)) {
+       case cpu_to_le16(IEEE802154_FCTL_ADDR_NONE):
+               dst_pan = NULL;
+               break;
+       case cpu_to_le16(IEEE802154_FCTL_DADDR_SHORT):
+       case cpu_to_le16(IEEE802154_FCTL_DADDR_EXTENDED):
+               dst_pan = skb_mac_header(skb) +
+                         IEEE802154_FC_LEN +
+                         IEEE802154_SEQ_LEN;
+               break;
+       default:
+               WARN_ONCE(1, "invalid addr mode detected");
+               dst_pan = NULL;
+               break;
+       }
+
+       return dst_pan;
+}
+
+/**
+ * ieee802154_skb_src_pan - get the pointer to source pan field
+ * @fc: mac header frame control field
+ * @skb: skb where the source pan pointer will be taken from
+ */
+static inline unsigned char *ieee802154_skb_src_pan(__le16 fc,
+                                                   const struct sk_buff *skb)
+{
+       unsigned char *src_pan;
+
+       switch (ieee802154_saddr_mode(fc)) {
+       case cpu_to_le16(IEEE802154_FCTL_ADDR_NONE):
+               src_pan = NULL;
+               break;
+       case cpu_to_le16(IEEE802154_FCTL_SADDR_SHORT):
+       case cpu_to_le16(IEEE802154_FCTL_SADDR_EXTENDED):
+                       /* if intra-pan and the source addr mode is not none,
+                        * then the source pan id equals the destination pan id.
+                */
+               if (ieee802154_is_intra_pan(fc)) {
+                       src_pan = ieee802154_skb_dst_pan(fc, skb);
+                       break;
+               }
+
+               switch (ieee802154_daddr_mode(fc)) {
+               case cpu_to_le16(IEEE802154_FCTL_ADDR_NONE):
+                       src_pan = skb_mac_header(skb) +
+                                 IEEE802154_FC_LEN +
+                                 IEEE802154_SEQ_LEN;
+                       break;
+               case cpu_to_le16(IEEE802154_FCTL_DADDR_SHORT):
+                       src_pan = skb_mac_header(skb) +
+                                 IEEE802154_FC_LEN +
+                                 IEEE802154_SEQ_LEN +
+                                 IEEE802154_PAN_ID_LEN +
+                                 IEEE802154_SHORT_ADDR_LEN;
+                       break;
+               case cpu_to_le16(IEEE802154_FCTL_DADDR_EXTENDED):
+                       src_pan = skb_mac_header(skb) +
+                                 IEEE802154_FC_LEN +
+                                 IEEE802154_SEQ_LEN +
+                                 IEEE802154_PAN_ID_LEN +
+                                 IEEE802154_EXTENDED_ADDR_LEN;
+                       break;
+               default:
+                       WARN_ONCE(1, "invalid addr mode detected");
+                       src_pan = NULL;
+                       break;
+               }
+               break;
+       default:
+               WARN_ONCE(1, "invalid addr mode detected");
+               src_pan = NULL;
+               break;
        }
 
-       return get_unaligned_le16(skb_mac_header(skb));
+       return src_pan;
+}
+
+/**
+ * ieee802154_skb_is_intra_pan_addressing - checks whether the mac addressing
+ *     indicates an intra pan communication
+ * @fc: mac header frame control field
+ * @skb: skb where the source and destination pan should be taken from
+ */
+static inline bool ieee802154_skb_is_intra_pan_addressing(__le16 fc,
+                                                         const struct sk_buff *skb)
+{
+       unsigned char *dst_pan = ieee802154_skb_dst_pan(fc, skb),
+                     *src_pan = ieee802154_skb_src_pan(fc, skb);
+
+       /* if either is NULL, this is not intra pan addressing */
+       if (!dst_pan || !src_pan)
+               return false;
+
+       return !memcmp(dst_pan, src_pan, IEEE802154_PAN_ID_LEN);
 }
 
 /**
diff --git a/include/net/ncsi.h b/include/net/ncsi.h
new file mode 100644 (file)
index 0000000..1dbf42f
--- /dev/null
@@ -0,0 +1,52 @@
+#ifndef __NET_NCSI_H
+#define __NET_NCSI_H
+
+/*
+ * The NCSI device states visible externally. More NCSI device states are
+ * only visible internally (in net/ncsi/internal.h). When the NCSI device
+ * is registered, it is in the ncsi_dev_state_registered state. The state
+ * ncsi_dev_state_start drives the selection of the active package and
+ * channel. After that, the state changes to ncsi_dev_state_functional.
+ *
+ * The state ncsi_dev_state_stop helps to shut down the currently active
+ * package and channel while ncsi_dev_state_config helps to reconfigure
+ * them.
+ */
+enum {
+       ncsi_dev_state_registered       = 0x0000,
+       ncsi_dev_state_functional       = 0x0100,
+       ncsi_dev_state_probe            = 0x0200,
+       ncsi_dev_state_config           = 0x0300,
+       ncsi_dev_state_suspend          = 0x0400,
+};
+
+struct ncsi_dev {
+       int               state;
+       int               link_up;
+       struct net_device *dev;
+       void              (*handler)(struct ncsi_dev *ndev);
+};
+
+#ifdef CONFIG_NET_NCSI
+struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
+                                  void (*notifier)(struct ncsi_dev *nd));
+int ncsi_start_dev(struct ncsi_dev *nd);
+void ncsi_unregister_dev(struct ncsi_dev *nd);
+#else /* !CONFIG_NET_NCSI */
+static inline struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
+                                       void (*notifier)(struct ncsi_dev *nd))
+{
+       return NULL;
+}
+
+static inline int ncsi_start_dev(struct ncsi_dev *nd)
+{
+       return -ENOTTY;
+}
+
+static inline void ncsi_unregister_dev(struct ncsi_dev *nd)
+{
+}
+#endif /* CONFIG_NET_NCSI */
+
+#endif /* __NET_NCSI_H */
index 1e04911..445b019 100644 (file)
@@ -291,6 +291,14 @@ static inline bool nf_is_loopback_packet(const struct sk_buff *skb)
        return skb->dev && skb->skb_iif && skb->dev->flags & IFF_LOOPBACK;
 }
 
+/* jiffies until ct expires, 0 if already expired */
+static inline unsigned long nf_ct_expires(const struct nf_conn *ct)
+{
+       long timeout = (long)ct->timeout.expires - (long)jiffies;
+
+       return timeout > 0 ? timeout : 0;
+}
+
 struct kernel_param;
 
 int nf_conntrack_set_hashsize(const char *val, struct kernel_param *kp);
index 0ae101e..74fa7eb 100644 (file)
@@ -220,12 +220,13 @@ struct nfc_digital_dev {
        struct list_head cmd_queue;
        struct mutex cmd_lock;
 
-       struct work_struct poll_work;
+       struct delayed_work poll_work;
 
        u8 curr_protocol;
        u8 curr_rf_tech;
        u8 curr_nfc_dep_pni;
        u8 did;
+       u16 dep_rwt;
 
        u8 local_payload_max;
        u8 remote_payload_max;
@@ -237,7 +238,6 @@ struct nfc_digital_dev {
        int nack_count;
 
        struct sk_buff *saved_skb;
-       unsigned int saved_skb_len;
 
        u16 target_fsc;
 
index c25fbde..7ecb457 100644 (file)
@@ -37,10 +37,6 @@ struct nfc_llc *nfc_llc_allocate(const char *name, struct nfc_hci_dev *hdev,
                                 int tx_tailroom, llc_failure_t llc_failure);
 void nfc_llc_free(struct nfc_llc *llc);
 
-void nfc_llc_get_rx_head_tail_room(struct nfc_llc *llc, int *rx_headroom,
-                                  int *rx_tailroom);
-
-
 int nfc_llc_start(struct nfc_llc *llc);
 int nfc_llc_stop(struct nfc_llc *llc);
 void nfc_llc_rcv_from_drv(struct nfc_llc *llc, struct sk_buff *skb);
index fcab4de..ddcee12 100644 (file)
@@ -54,6 +54,8 @@ enum nl802154_commands {
 
        NL802154_CMD_SET_ACKREQ_DEFAULT,
 
+       NL802154_CMD_SET_WPAN_PHY_NETNS,
+
        /* add new commands above here */
 
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
@@ -124,6 +126,11 @@ enum nl802154_attrs {
 
        NL802154_ATTR_ACKREQ_DEFAULT,
 
+       NL802154_ATTR_PAD,
+
+       NL802154_ATTR_PID,
+       NL802154_ATTR_NETNS_FD,
+
        /* add attributes here, update the policy in nl802154.c */
 
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
@@ -138,8 +145,6 @@ enum nl802154_attrs {
        NL802154_ATTR_SEC_KEY,
 #endif /* CONFIG_IEEE802154_NL802154_EXPERIMENTAL */
 
-       NL802154_ATTR_PAD,
-
        __NL802154_ATTR_AFTER_LAST,
        NL802154_ATTR_MAX = __NL802154_ATTR_AFTER_LAST - 1
 };
index 83c5ec5..ce93c4b 100644 (file)
@@ -59,6 +59,7 @@
 #include <linux/workqueue.h>   /* We need tq_struct.    */
 #include <linux/sctp.h>                /* We need sctp* header structs.  */
 #include <net/sctp/auth.h>     /* We need auth specific structs */
+#include <net/ip.h>            /* For inet_skb_parm */
 
 /* A convenience structure for handling sockaddr structures.
  * We should wean ourselves off this.
@@ -602,6 +603,16 @@ struct sctp_chunk {
        /* This needs to be recoverable for SCTP_SEND_FAILED events. */
        struct sctp_sndrcvinfo sinfo;
 
+       /* This field records the param for prsctp policies:
+        * for the TTL policy, it is the time_to_drop of this chunk;
+        * for the RTX policy, it is the max_sent_count of this chunk;
+        * for the PRIO policy, it is the priority of this chunk.
+        */
+       unsigned long prsctp_param;
+
+       /* How many times this chunk has been sent, for the prsctp RTX policy */
+       int sent_count;
+
        /* Which association does this belong to?  */
        struct sctp_association *asoc;
 
@@ -1074,12 +1085,36 @@ void sctp_retransmit(struct sctp_outq *, struct sctp_transport *,
                     sctp_retransmit_reason_t);
 void sctp_retransmit_mark(struct sctp_outq *, struct sctp_transport *, __u8);
 int sctp_outq_uncork(struct sctp_outq *, gfp_t gfp);
+void sctp_prsctp_prune(struct sctp_association *asoc,
+                      struct sctp_sndrcvinfo *sinfo, int msg_len);
 /* Uncork and flush an outqueue.  */
 static inline void sctp_outq_cork(struct sctp_outq *q)
 {
        q->cork = 1;
 }
 
+/* SCTP skb control block.
+ * sctp_input_cb is currently used on the rx path and the socket rx queue
+ */
+struct sctp_input_cb {
+       union {
+               struct inet_skb_parm    h4;
+#if IS_ENABLED(CONFIG_IPV6)
+               struct inet6_skb_parm   h6;
+#endif
+       } header;
+       struct sctp_chunk *chunk;
+       struct sctp_af *af;
+};
+#define SCTP_INPUT_CB(__skb)   ((struct sctp_input_cb *)&((__skb)->cb[0]))
+
+static inline const struct sk_buff *sctp_gso_headskb(const struct sk_buff *skb)
+{
+       const struct sctp_chunk *chunk = SCTP_INPUT_CB(skb)->chunk;
+
+       return chunk->head_skb ? : skb;
+}
+
 /* These bind address data fields common between endpoints and associations */
 struct sctp_bind_addr {
 
@@ -1256,7 +1291,8 @@ struct sctp_endpoint {
        /* SCTP-AUTH: endpoint shared keys */
        struct list_head endpoint_shared_keys;
        __u16 active_key_id;
-       __u8  auth_enable;
+       __u8  auth_enable:1,
+             prsctp_enable:1;
 };
 
 /* Recover the outter endpoint structure. */
@@ -1848,9 +1884,15 @@ struct sctp_association {
        __u16 active_key_id;
 
        __u8 need_ecne:1,       /* Need to send an ECNE Chunk? */
-            temp:1;            /* Is it a temporary association? */
+            temp:1,            /* Is it a temporary association? */
+            prsctp_enable:1;
 
        struct sctp_priv_assoc_stats stats;
+
+       int sent_cnt_removable;
+
+       __u64 abandoned_unsent[SCTP_PR_INDEX(MAX) + 1];
+       __u64 abandoned_sent[SCTP_PR_INDEX(MAX) + 1];
 };
 
 
index cccdcfd..2c098cd 100644 (file)
  */
 struct sctp_ulpevent {
        struct sctp_association *asoc;
-       __u16 stream;
-       __u16 ssn;
-       __u16 flags;
+       struct sctp_chunk *chunk;
+       unsigned int rmem_len;
        __u32 ppid;
        __u32 tsn;
        __u32 cumtsn;
-       int msg_flags;
-       int iif;
-       unsigned int rmem_len;
+       __u16 stream;
+       __u16 ssn;
+       __u16 flags;
+       __u16 msg_flags;
 };
 
 /* Retrieve the skb this event sits inside of. */
index 649d2a8..ff5be7e 100644 (file)
@@ -1576,7 +1576,13 @@ static inline void sock_put(struct sock *sk)
  */
 void sock_gen_put(struct sock *sk);
 
-int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested);
+int __sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested,
+                    unsigned int trim_cap);
+static inline int sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+                                const int nested)
+{
+       return __sk_receive_skb(sk, skb, nested, 1);
+}
 
 static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
 {
index 985619a..62f6a96 100644 (file)
@@ -60,7 +60,7 @@ struct switchdev_attr {
                struct netdev_phys_item_id ppid;        /* PORT_PARENT_ID */
                u8 stp_state;                           /* PORT_STP_STATE */
                unsigned long brport_flags;             /* PORT_BRIDGE_FLAGS */
-               u32 ageing_time;                        /* BRIDGE_AGEING_TIME */
+               clock_t ageing_time;                    /* BRIDGE_AGEING_TIME */
                bool vlan_filtering;                    /* BRIDGE_VLAN_FILTERING */
        } u;
 };
@@ -227,6 +227,8 @@ void switchdev_port_fwd_mark_set(struct net_device *dev,
                                 struct net_device *group_dev,
                                 bool joining);
 
+bool switchdev_port_same_parent_id(struct net_device *a,
+                                  struct net_device *b);
 #else
 
 static inline void switchdev_deferred_process(void)
@@ -351,6 +353,12 @@ static inline void switchdev_port_fwd_mark_set(struct net_device *dev,
 {
 }
 
+static inline bool switchdev_port_same_parent_id(struct net_device *a,
+                                                struct net_device *b)
+{
+       return false;
+}
+
 #endif
 
 #endif /* _LINUX_SWITCHDEV_H_ */
diff --git a/include/trace/events/devlink.h b/include/trace/events/devlink.h
new file mode 100644 (file)
index 0000000..09f1df2
--- /dev/null
@@ -0,0 +1,68 @@
+#if IS_ENABLED(CONFIG_NET_DEVLINK)
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM devlink
+
+#if !defined(_TRACE_DEVLINK_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_DEVLINK_H
+
+#include <linux/device.h>
+#include <net/devlink.h>
+#include <linux/tracepoint.h>
+
+/*
+ * Tracepoint for devlink hardware message:
+ */
+TRACE_EVENT(devlink_hwmsg,
+       TP_PROTO(const struct devlink *devlink, bool incoming,
+                unsigned long type, const u8 *buf, size_t len),
+
+       TP_ARGS(devlink, incoming, type, buf, len),
+
+       TP_STRUCT__entry(
+               __string(bus_name, devlink->dev->bus->name)
+               __string(dev_name, dev_name(devlink->dev))
+               __string(driver_name, devlink->dev->driver->name)
+               __field(bool, incoming)
+               __field(unsigned long, type)
+               __dynamic_array(u8, buf, len)
+               __field(size_t, len)
+       ),
+
+       TP_fast_assign(
+               __assign_str(bus_name, devlink->dev->bus->name);
+               __assign_str(dev_name, dev_name(devlink->dev));
+               __assign_str(driver_name, devlink->dev->driver->name);
+               __entry->incoming = incoming;
+               __entry->type = type;
+               memcpy(__get_dynamic_array(buf), buf, len);
+               __entry->len = len;
+       ),
+
+       TP_printk("bus_name=%s dev_name=%s driver_name=%s incoming=%d type=%lu buf=0x[%*phD] len=%zu",
+                 __get_str(bus_name), __get_str(dev_name),
+                 __get_str(driver_name), __entry->incoming, __entry->type,
+                 (int) __entry->len, __get_dynamic_array(buf), __entry->len)
+);
+
+#endif /* _TRACE_DEVLINK_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
+
+#else /* CONFIG_NET_DEVLINK */
+
+#if !defined(_TRACE_DEVLINK_H)
+#define _TRACE_DEVLINK_H
+
+#include <net/devlink.h>
+
+static inline void trace_devlink_hwmsg(const struct devlink *devlink,
+                                      bool incoming, unsigned long type,
+                                      const u8 *buf, size_t len)
+{
+}
+
+#endif /* _TRACE_DEVLINK_H */
+
+#endif
index 8fe1e93..0b9e513 100644
 
 TRACE_EVENT(napi_poll,
 
-       TP_PROTO(struct napi_struct *napi),
+       TP_PROTO(struct napi_struct *napi, int work, int budget),
 
-       TP_ARGS(napi),
+       TP_ARGS(napi, work, budget),
 
        TP_STRUCT__entry(
                __field(        struct napi_struct *,   napi)
                __string(       dev_name, napi->dev ? napi->dev->name : NO_DEV)
+               __field(        int,                    work)
+               __field(        int,                    budget)
        ),
 
        TP_fast_assign(
                __entry->napi = napi;
                __assign_str(dev_name, napi->dev ? napi->dev->name : NO_DEV);
+               __entry->work = work;
+               __entry->budget = budget;
        ),
 
-       TP_printk("napi poll on napi struct %p for device %s",
-               __entry->napi, __get_str(dev_name))
+       TP_printk("napi poll on napi struct %p for device %s work %d budget %d",
+                 __entry->napi, __get_str(dev_name),
+                 __entry->work, __entry->budget)
 );
 
 #undef NO_DEV
index 8bdae34..ec10cfe 100644
@@ -245,6 +245,7 @@ endif
 header-y += hw_breakpoint.h
 header-y += l2tp.h
 header-y += libc-compat.h
+header-y += lirc.h
 header-y += limits.h
 header-y += llc.h
 header-y += loop.h
index c14ca1c..2b7076f 100644
@@ -94,6 +94,7 @@ enum bpf_prog_type {
        BPF_PROG_TYPE_SCHED_CLS,
        BPF_PROG_TYPE_SCHED_ACT,
        BPF_PROG_TYPE_TRACEPOINT,
+       BPF_PROG_TYPE_XDP,
 };
 
 #define BPF_PSEUDO_MAP_FD      1
@@ -357,6 +358,13 @@ enum bpf_func_id {
         */
        BPF_FUNC_get_hash_recalc,
 
+       /**
+        * u64 bpf_get_current_task(void)
+        * Returns current task_struct
+        * Return: current
+        */
+       BPF_FUNC_get_current_task,
+
        __BPF_FUNC_MAX_ID,
 };
 
@@ -394,6 +402,8 @@ enum bpf_func_id {
 /* BPF_FUNC_perf_event_output and BPF_FUNC_perf_event_read flags. */
 #define BPF_F_INDEX_MASK               0xffffffffULL
 #define BPF_F_CURRENT_CPU              BPF_F_INDEX_MASK
+/* BPF_FUNC_perf_event_output for sk_buff input context. */
+#define BPF_F_CTXLEN_MASK              (0xfffffULL << 32)
 
 /* user accessible mirror of in-kernel sk_buff.
  * new fields can only be added to the end of this structure
@@ -430,4 +440,24 @@ struct bpf_tunnel_key {
        __u32 tunnel_label;
 };
 
+/* User return codes for XDP prog type.
+ * A valid XDP program must return one of these defined values. All other
+ * return codes are reserved for future use. Unknown return codes will result
+ * in packet drop.
+ */
+enum xdp_action {
+       XDP_ABORTED = 0,
+       XDP_DROP,
+       XDP_PASS,
+       XDP_TX,
+};
+
+/* user accessible metadata for XDP packet hook
+ * new fields must be added to the end of this structure
+ */
+struct xdp_md {
+       __u32 data;
+       __u32 data_end;
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
index c3fdfe7..cb5d1a5 100644
@@ -40,6 +40,7 @@
 #define EM_TILEPRO     188     /* Tilera TILEPro */
 #define EM_MICROBLAZE  189     /* Xilinx MicroBlaze */
 #define EM_TILEGX      191     /* Tilera TILE-Gx */
+#define EM_BPF         247     /* Linux BPF - in-kernel virtual machine */
 #define EM_FRV         0x5441  /* Fujitsu FR-V */
 #define EM_AVR32       0x18ad  /* Atmel AVR32 */
 
index 8304fe6..c186f64 100644
@@ -261,14 +261,17 @@ enum {
 
 /* IGMP/MLD statistics */
 struct br_mcast_stats {
-       __u64 igmp_queries[BR_MCAST_DIR_SIZE];
+       __u64 igmp_v1queries[BR_MCAST_DIR_SIZE];
+       __u64 igmp_v2queries[BR_MCAST_DIR_SIZE];
+       __u64 igmp_v3queries[BR_MCAST_DIR_SIZE];
        __u64 igmp_leaves[BR_MCAST_DIR_SIZE];
        __u64 igmp_v1reports[BR_MCAST_DIR_SIZE];
        __u64 igmp_v2reports[BR_MCAST_DIR_SIZE];
        __u64 igmp_v3reports[BR_MCAST_DIR_SIZE];
        __u64 igmp_parse_errors;
 
-       __u64 mld_queries[BR_MCAST_DIR_SIZE];
+       __u64 mld_v1queries[BR_MCAST_DIR_SIZE];
+       __u64 mld_v2queries[BR_MCAST_DIR_SIZE];
        __u64 mld_leaves[BR_MCAST_DIR_SIZE];
        __u64 mld_v1reports[BR_MCAST_DIR_SIZE];
        __u64 mld_v2reports[BR_MCAST_DIR_SIZE];
index cec849a..117d02e 100644
@@ -87,6 +87,7 @@
 #define ETH_P_8021AH   0x88E7          /* 802.1ah Backbone Service Tag */
 #define ETH_P_MVRP     0x88F5          /* 802.1Q MVRP                  */
 #define ETH_P_1588     0x88F7          /* IEEE 1588 Timesync */
+#define ETH_P_NCSI     0x88F8          /* NCSI protocol                */
 #define ETH_P_PRP      0x88FB          /* IEC 62439-3 PRP/HSRv0        */
 #define ETH_P_FCOE     0x8906          /* Fibre Channel over Ethernet  */
 #define ETH_P_TDLS     0x890D          /* TDLS */
index 4285ac3..a1b5202 100644
@@ -156,6 +156,7 @@ enum {
        IFLA_GSO_MAX_SEGS,
        IFLA_GSO_MAX_SIZE,
        IFLA_PAD,
+       IFLA_XDP,
        __IFLA_MAX
 };
 
@@ -843,4 +844,15 @@ enum {
 };
 #define LINK_XSTATS_TYPE_MAX (__LINK_XSTATS_TYPE_MAX - 1)
 
+/* XDP section */
+
+enum {
+       IFLA_XDP_UNSPEC,
+       IFLA_XDP_FD,
+       IFLA_XDP_ATTACHED,
+       __IFLA_XDP_MAX,
+};
+
+#define IFLA_XDP_MAX (__IFLA_XDP_MAX - 1)
+
 #endif /* _UAPI_LINUX_IF_LINK_H */
index 737fa32..d6d071f 100644
 #define SW_ROTATE_LOCK         0x0c  /* set = rotate locked/disabled */
 #define SW_LINEIN_INSERT       0x0d  /* set = inserted */
 #define SW_MUTE_DEVICE         0x0e  /* set = device disabled */
+#define SW_PEN_INSERTED                0x0f  /* set = pen inserted */
 #define SW_MAX                 0x0f
 #define SW_CNT                 (SW_MAX+1)
 
index ce70fe6..d304f4c 100644
@@ -112,6 +112,31 @@ typedef __s32 sctp_assoc_t;
 #define SCTP_SOCKOPT_CONNECTX  110             /* CONNECTX requests. */
 #define SCTP_SOCKOPT_CONNECTX3 111     /* CONNECTX requests (updated) */
 #define SCTP_GET_ASSOC_STATS   112     /* Read only */
+#define SCTP_PR_SUPPORTED      113
+#define SCTP_DEFAULT_PRINFO    114
+#define SCTP_PR_ASSOC_STATUS   115
+
+/* PR-SCTP policies */
+#define SCTP_PR_SCTP_NONE      0x0000
+#define SCTP_PR_SCTP_TTL       0x0010
+#define SCTP_PR_SCTP_RTX       0x0020
+#define SCTP_PR_SCTP_PRIO      0x0030
+#define SCTP_PR_SCTP_MAX       SCTP_PR_SCTP_PRIO
+#define SCTP_PR_SCTP_MASK      0x0030
+
+#define __SCTP_PR_INDEX(x)     ((x >> 4) - 1)
+#define SCTP_PR_INDEX(x)       __SCTP_PR_INDEX(SCTP_PR_SCTP_ ## x)
+
+#define SCTP_PR_POLICY(x)      ((x) & SCTP_PR_SCTP_MASK)
+#define SCTP_PR_SET_POLICY(flags, x)   \
+       do {                            \
+               flags &= ~SCTP_PR_SCTP_MASK;    \
+               flags |= x;             \
+       } while (0)
+
+#define SCTP_PR_TTL_ENABLED(x) (SCTP_PR_POLICY(x) == SCTP_PR_SCTP_TTL)
+#define SCTP_PR_RTX_ENABLED(x) (SCTP_PR_POLICY(x) == SCTP_PR_SCTP_RTX)
+#define SCTP_PR_PRIO_ENABLED(x)        (SCTP_PR_POLICY(x) == SCTP_PR_SCTP_PRIO)
 
 /* These are bit fields for msghdr->msg_flags.  See section 5.1.  */
 /* On user space Linux, these live in <bits/socket.h> as an enum.  */
@@ -902,4 +927,21 @@ struct sctp_paddrthlds {
        __u16 spt_pathpfthld;
 };
 
+/*
+ * Socket Option for Getting the Association/Stream-Specific PR-SCTP Status
+ */
+struct sctp_prstatus {
+       sctp_assoc_t sprstat_assoc_id;
+       __u16 sprstat_sid;
+       __u16 sprstat_policy;
+       __u64 sprstat_abandoned_unsent;
+       __u64 sprstat_abandoned_sent;
+};
+
+struct sctp_default_prinfo {
+       sctp_assoc_t pr_assoc_id;
+       __u32 pr_value;
+       __u16 pr_policy;
+};
+
 #endif /* _UAPI_SCTP_H */
index f755a60..c02d897 100644
@@ -1458,6 +1458,7 @@ config KALLSYMS_ALL
 
 config KALLSYMS_ABSOLUTE_PERCPU
        bool
+       depends on KALLSYMS
        default X86_64 && SMP
 
 config KALLSYMS_BASE_RELATIVE
index db1a743..633a650 100644
@@ -430,7 +430,7 @@ static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
 {
        struct bpf_event_entry *ee;
 
-       ee = kzalloc(sizeof(*ee), GFP_KERNEL);
+       ee = kzalloc(sizeof(*ee), GFP_ATOMIC);
        if (ee) {
                ee->event = perf_file->private_data;
                ee->perf_file = perf_file;
index d638062..03fd23d 100644
@@ -1054,9 +1054,11 @@ const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
        return NULL;
 }
 
-const struct bpf_func_proto * __weak bpf_get_event_output_proto(void)
+u64 __weak
+bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+                void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
 {
-       return NULL;
+       return -ENOTSUPP;
 }
 
 /* Always built-in helper functions. */
index 318858e..5967b87 100644
@@ -11,7 +11,7 @@
  * version 2 as published by the Free Software Foundation.
  */
 
-#include <linux/module.h>
+#include <linux/init.h>
 #include <linux/magic.h>
 #include <linux/major.h>
 #include <linux/mount.h>
@@ -367,8 +367,6 @@ static struct file_system_type bpf_fs_type = {
        .kill_sb        = kill_litter_super,
 };
 
-MODULE_ALIAS_FS("bpf");
-
 static int __init bpf_init(void)
 {
        int ret;
index 96d938a..228f962 100644
@@ -670,14 +670,20 @@ static struct bpf_prog *____bpf_prog_get(struct fd f)
        return f.file->private_data;
 }
 
-struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)
+struct bpf_prog *bpf_prog_add(struct bpf_prog *prog, int i)
 {
-       if (atomic_inc_return(&prog->aux->refcnt) > BPF_MAX_REFCNT) {
-               atomic_dec(&prog->aux->refcnt);
+       if (atomic_add_return(i, &prog->aux->refcnt) > BPF_MAX_REFCNT) {
+               atomic_sub(i, &prog->aux->refcnt);
                return ERR_PTR(-EBUSY);
        }
        return prog;
 }
+EXPORT_SYMBOL_GPL(bpf_prog_add);
+
+struct bpf_prog *bpf_prog_inc(struct bpf_prog *prog)
+{
+       return bpf_prog_add(prog, 1);
+}
 
 static struct bpf_prog *__bpf_prog_get(u32 ufd, enum bpf_prog_type *type)
 {
index e206c21..f72f23b 100644
@@ -653,6 +653,16 @@ static int check_map_access(struct verifier_env *env, u32 regno, int off,
 
 #define MAX_PACKET_OFF 0xffff
 
+static bool may_write_pkt_data(enum bpf_prog_type type)
+{
+       switch (type) {
+       case BPF_PROG_TYPE_XDP:
+               return true;
+       default:
+               return false;
+       }
+}
+
 static int check_packet_access(struct verifier_env *env, u32 regno, int off,
                               int size)
 {
@@ -713,6 +723,7 @@ static int check_ptr_alignment(struct verifier_env *env, struct reg_state *reg,
        switch (env->prog->type) {
        case BPF_PROG_TYPE_SCHED_CLS:
        case BPF_PROG_TYPE_SCHED_ACT:
+       case BPF_PROG_TYPE_XDP:
                break;
        default:
                verbose("verifier is misconfigured\n");
@@ -805,10 +816,15 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
                        err = check_stack_read(state, off, size, value_regno);
                }
        } else if (state->regs[regno].type == PTR_TO_PACKET) {
-               if (t == BPF_WRITE) {
+               if (t == BPF_WRITE && !may_write_pkt_data(env->prog->type)) {
                        verbose("cannot write into packet\n");
                        return -EACCES;
                }
+               if (t == BPF_WRITE && value_regno >= 0 &&
+                   is_pointer_value(env, value_regno)) {
+                       verbose("R%d leaks addr into packet\n", value_regno);
+                       return -EACCES;
+               }
                err = check_packet_access(env, regno, off, size);
                if (!err && t == BPF_READ && value_regno >= 0)
                        mark_reg_unknown_value(state->regs, value_regno);
index d948e44..7b61887 100644
@@ -1201,6 +1201,8 @@ static struct cpuhp_step cpuhp_bp_states[] = {
                .teardown               = takedown_cpu,
                .cant_stop              = true,
        },
+#else
+       [CPUHP_BRINGUP_CPU] = { },
 #endif
 };
 
index 9c51ec3..195e765 100644
@@ -1678,12 +1678,33 @@ static bool is_orphaned_event(struct perf_event *event)
        return event->state == PERF_EVENT_STATE_DEAD;
 }
 
-static inline int pmu_filter_match(struct perf_event *event)
+static inline int __pmu_filter_match(struct perf_event *event)
 {
        struct pmu *pmu = event->pmu;
        return pmu->filter_match ? pmu->filter_match(event) : 1;
 }
 
+/*
+ * Check whether we should attempt to schedule an event group based on
+ * PMU-specific filtering. An event group can consist of HW and SW events,
+ * potentially with a SW leader, so we must check all the filters, to
+ * determine whether a group is schedulable:
+ */
+static inline int pmu_filter_match(struct perf_event *event)
+{
+       struct perf_event *child;
+
+       if (!__pmu_filter_match(event))
+               return 0;
+
+       list_for_each_entry(child, &event->sibling_list, group_entry) {
+               if (!__pmu_filter_match(child))
+                       return 0;
+       }
+
+       return 1;
+}
+
 static inline int
 event_filter_match(struct perf_event *event)
 {
@@ -5553,16 +5574,26 @@ void perf_output_sample(struct perf_output_handle *handle,
        }
 
        if (sample_type & PERF_SAMPLE_RAW) {
-               if (data->raw) {
-                       u32 raw_size = data->raw->size;
-                       u32 real_size = round_up(raw_size + sizeof(u32),
-                                                sizeof(u64)) - sizeof(u32);
-                       u64 zero = 0;
-
-                       perf_output_put(handle, real_size);
-                       __output_copy(handle, data->raw->data, raw_size);
-                       if (real_size - raw_size)
-                               __output_copy(handle, &zero, real_size - raw_size);
+               struct perf_raw_record *raw = data->raw;
+
+               if (raw) {
+                       struct perf_raw_frag *frag = &raw->frag;
+
+                       perf_output_put(handle, raw->size);
+                       do {
+                               if (frag->copy) {
+                                       __output_custom(handle, frag->copy,
+                                                       frag->data, frag->size);
+                               } else {
+                                       __output_copy(handle, frag->data,
+                                                     frag->size);
+                               }
+                               if (perf_raw_frag_last(frag))
+                                       break;
+                               frag = frag->next;
+                       } while (1);
+                       if (frag->pad)
+                               __output_skip(handle, NULL, frag->pad);
                } else {
                        struct {
                                u32     size;
@@ -5687,14 +5718,28 @@ void perf_prepare_sample(struct perf_event_header *header,
        }
 
        if (sample_type & PERF_SAMPLE_RAW) {
-               int size = sizeof(u32);
-
-               if (data->raw)
-                       size += data->raw->size;
-               else
-                       size += sizeof(u32);
+               struct perf_raw_record *raw = data->raw;
+               int size;
+
+               if (raw) {
+                       struct perf_raw_frag *frag = &raw->frag;
+                       u32 sum = 0;
+
+                       do {
+                               sum += frag->size;
+                               if (perf_raw_frag_last(frag))
+                                       break;
+                               frag = frag->next;
+                       } while (1);
+
+                       size = round_up(sum + sizeof(u32), sizeof(u64));
+                       raw->size = size - sizeof(u32);
+                       frag->pad = raw->size - sum;
+               } else {
+                       size = sizeof(u64);
+               }
 
-               header->size += round_up(size, sizeof(u64));
+               header->size += size;
        }
 
        if (sample_type & PERF_SAMPLE_BRANCH_STACK) {
@@ -7331,7 +7376,7 @@ static struct pmu perf_swevent = {
 static int perf_tp_filter_match(struct perf_event *event,
                                struct perf_sample_data *data)
 {
-       void *record = data->raw->data;
+       void *record = data->raw->frag.data;
 
        /* only top level events have filters set */
        if (event->parent)
@@ -7387,8 +7432,10 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
        struct perf_event *event;
 
        struct perf_raw_record raw = {
-               .size = entry_size,
-               .data = record,
+               .frag = {
+                       .size = entry_size,
+                       .data = record,
+               },
        };
 
        perf_sample_data_init(&data, 0, 0);
index 05f9f6d..2417eb5 100644
@@ -123,10 +123,7 @@ static inline unsigned long perf_aux_size(struct ring_buffer *rb)
        return rb->aux_nr_pages << PAGE_SHIFT;
 }
 
-#define DEFINE_OUTPUT_COPY(func_name, memcpy_func)                     \
-static inline unsigned long                                            \
-func_name(struct perf_output_handle *handle,                           \
-         const void *buf, unsigned long len)                           \
+#define __DEFINE_OUTPUT_COPY_BODY(memcpy_func)                         \
 {                                                                      \
        unsigned long size, written;                                    \
                                                                        \
@@ -152,6 +149,17 @@ func_name(struct perf_output_handle *handle,                               \
        return len;                                                     \
 }
 
+#define DEFINE_OUTPUT_COPY(func_name, memcpy_func)                     \
+static inline unsigned long                                            \
+func_name(struct perf_output_handle *handle,                           \
+         const void *buf, unsigned long len)                           \
+__DEFINE_OUTPUT_COPY_BODY(memcpy_func)
+
+static inline unsigned long
+__output_custom(struct perf_output_handle *handle, perf_copy_f copy_func,
+               const void *buf, unsigned long len)
+__DEFINE_OUTPUT_COPY_BODY(copy_func)
+
 static inline unsigned long
 memcpy_common(void *dst, const void *src, unsigned long n)
 {
index e25e92f..6a5c239 100644
@@ -18,7 +18,7 @@
 #include <linux/vmalloc.h>
 #include "gcov.h"
 
-#if __GNUC__ == 5 && __GNUC_MINOR__ >= 1
+#if (__GNUC__ > 5) || (__GNUC__ == 5 && __GNUC_MINOR__ >= 1)
 #define GCOV_COUNTERS                  10
 #elif __GNUC__ == 4 && __GNUC_MINOR__ >= 9
 #define GCOV_COUNTERS                  9
index 51d7105..97ee9ac 100644
@@ -5394,13 +5394,15 @@ void idle_task_exit(void)
 /*
  * Since this CPU is going 'away' for a while, fold any nr_active delta
  * we might have. Assumes we're called after migrate_tasks() so that the
- * nr_active count is stable.
+ * nr_active count is stable. We need to take the teardown thread which
+ * is calling this into account, so we hand in adjust = 1 to the load
+ * calculation.
  *
  * Also see the comment "Global load-average calculations".
  */
 static void calc_load_migrate(struct rq *rq)
 {
-       long delta = calc_load_fold_active(rq);
+       long delta = calc_load_fold_active(rq, 1);
        if (delta)
                atomic_long_add(delta, &calc_load_tasks);
 }
index bdcbeea..c8c5d2d 100644
@@ -735,8 +735,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
        }
 }
 
-static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
-static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
 #else
 void init_entity_runnable_average(struct sched_entity *se)
 {
@@ -2499,28 +2497,22 @@ account_entity_dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 # ifdef CONFIG_SMP
-static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
+static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
 {
-       long tg_weight;
+       long tg_weight, load, shares;
 
        /*
-        * Use this CPU's real-time load instead of the last load contribution
-        * as the updating of the contribution is delayed, and we will use the
-        * the real-time load to calc the share. See update_tg_load_avg().
+        * This really should be: cfs_rq->avg.load_avg, but instead we use
+        * cfs_rq->load.weight, which is its upper bound. This helps ramp up
+        * the shares for small weight interactive tasks.
         */
-       tg_weight = atomic_long_read(&tg->load_avg);
-       tg_weight -= cfs_rq->tg_load_avg_contrib;
-       tg_weight += cfs_rq->load.weight;
+       load = scale_load_down(cfs_rq->load.weight);
 
-       return tg_weight;
-}
-
-static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
-{
-       long tg_weight, load, shares;
+       tg_weight = atomic_long_read(&tg->load_avg);
 
-       tg_weight = calc_tg_weight(tg, cfs_rq);
-       load = cfs_rq->load.weight;
+       /* Ensure tg_weight >= load */
+       tg_weight -= cfs_rq->tg_load_avg_contrib;
+       tg_weight += load;
 
        shares = (tg->shares * load);
        if (tg_weight)
@@ -2539,6 +2531,7 @@ static inline long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
        return tg->shares;
 }
 # endif /* CONFIG_SMP */
+
 static void reweight_entity(struct cfs_rq *cfs_rq, struct sched_entity *se,
                            unsigned long weight)
 {
@@ -4946,19 +4939,24 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
                return wl;
 
        for_each_sched_entity(se) {
-               long w, W;
+               struct cfs_rq *cfs_rq = se->my_q;
+               long W, w = cfs_rq_load_avg(cfs_rq);
 
-               tg = se->my_q->tg;
+               tg = cfs_rq->tg;
 
                /*
                 * W = @wg + \Sum rw_j
                 */
-               W = wg + calc_tg_weight(tg, se->my_q);
+               W = wg + atomic_long_read(&tg->load_avg);
+
+               /* Ensure \Sum rw_j >= rw_i */
+               W -= cfs_rq->tg_load_avg_contrib;
+               W += w;
 
                /*
                 * w = rw_i + @wl
                 */
-               w = cfs_rq_load_avg(se->my_q) + wl;
+               w += wl;
 
                /*
                 * wl = S * s'_i; see (2)
index b0b93fd..a2d6eb7 100644
@@ -78,11 +78,11 @@ void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
        loads[2] = (avenrun[2] + offset) << shift;
 }
 
-long calc_load_fold_active(struct rq *this_rq)
+long calc_load_fold_active(struct rq *this_rq, long adjust)
 {
        long nr_active, delta = 0;
 
-       nr_active = this_rq->nr_running;
+       nr_active = this_rq->nr_running - adjust;
        nr_active += (long)this_rq->nr_uninterruptible;
 
        if (nr_active != this_rq->calc_load_active) {
@@ -188,7 +188,7 @@ void calc_load_enter_idle(void)
         * We're going into NOHZ mode, if there's any pending delta, fold it
         * into the pending idle delta.
         */
-       delta = calc_load_fold_active(this_rq);
+       delta = calc_load_fold_active(this_rq, 0);
        if (delta) {
                int idx = calc_load_write_idx();
 
@@ -389,7 +389,7 @@ void calc_global_load_tick(struct rq *this_rq)
        if (time_before(jiffies, this_rq->calc_load_update))
                return;
 
-       delta  = calc_load_fold_active(this_rq);
+       delta  = calc_load_fold_active(this_rq, 0);
        if (delta)
                atomic_long_add(delta, &calc_load_tasks);
 
index 7cbeb92..898c0d2 100644
@@ -28,7 +28,7 @@ extern unsigned long calc_load_update;
 extern atomic_long_t calc_load_tasks;
 
 extern void calc_global_load_tick(struct rq *this_rq);
-extern long calc_load_fold_active(struct rq *this_rq);
+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
 
 #ifdef CONFIG_SMP
 extern void cpu_load_update_active(struct rq *this_rq);
index 1cafba8..39008d7 100644
@@ -777,6 +777,7 @@ static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec *itp)
                        timer->it.cpu.expires = 0;
                        sample_to_timespec(timer->it_clock, timer->it.cpu.expires,
                                           &itp->it_value);
+                       return;
                } else {
                        cpu_timer_sample_group(timer->it_clock, p, &now);
                        unlock_task_sighand(p, &flags);
index 19c5b4a..a12bbd3 100644
@@ -233,24 +233,17 @@ static const struct bpf_func_proto bpf_perf_event_read_proto = {
        .arg2_type      = ARG_ANYTHING,
 };
 
-static u64 bpf_perf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
+static __always_inline u64
+__bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+                       u64 flags, struct perf_raw_record *raw)
 {
-       struct pt_regs *regs = (struct pt_regs *) (long) r1;
-       struct bpf_map *map = (struct bpf_map *) (long) r2;
        struct bpf_array *array = container_of(map, struct bpf_array, map);
        unsigned int cpu = smp_processor_id();
        u64 index = flags & BPF_F_INDEX_MASK;
-       void *data = (void *) (long) r4;
        struct perf_sample_data sample_data;
        struct bpf_event_entry *ee;
        struct perf_event *event;
-       struct perf_raw_record raw = {
-               .size = size,
-               .data = data,
-       };
 
-       if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
-               return -EINVAL;
        if (index == BPF_F_CURRENT_CPU)
                index = cpu;
        if (unlikely(index >= array->map.max_entries))
@@ -269,11 +262,29 @@ static u64 bpf_perf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
                return -EOPNOTSUPP;
 
        perf_sample_data_init(&sample_data, 0, 0);
-       sample_data.raw = &raw;
+       sample_data.raw = raw;
        perf_event_output(event, &sample_data, regs);
        return 0;
 }
 
+static u64 bpf_perf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
+{
+       struct pt_regs *regs = (struct pt_regs *)(long) r1;
+       struct bpf_map *map  = (struct bpf_map *)(long) r2;
+       void *data = (void *)(long) r4;
+       struct perf_raw_record raw = {
+               .frag = {
+                       .size = size,
+                       .data = data,
+               },
+       };
+
+       if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
+               return -EINVAL;
+
+       return __bpf_perf_event_output(regs, map, flags, &raw);
+}
+
 static const struct bpf_func_proto bpf_perf_event_output_proto = {
        .func           = bpf_perf_event_output,
        .gpl_only       = true,
@@ -287,31 +298,41 @@ static const struct bpf_func_proto bpf_perf_event_output_proto = {
 
 static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs);
 
-static u64 bpf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
+u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+                    void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy)
 {
        struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs);
+       struct perf_raw_frag frag = {
+               .copy           = ctx_copy,
+               .size           = ctx_size,
+               .data           = ctx,
+       };
+       struct perf_raw_record raw = {
+               .frag = {
+                       {
+                               .next   = ctx_size ? &frag : NULL,
+                       },
+                       .size   = meta_size,
+                       .data   = meta,
+               },
+       };
 
        perf_fetch_caller_regs(regs);
 
-       return bpf_perf_event_output((long)regs, r2, flags, r4, size);
+       return __bpf_perf_event_output(regs, map, flags, &raw);
 }
 
-static const struct bpf_func_proto bpf_event_output_proto = {
-       .func           = bpf_event_output,
+static u64 bpf_get_current_task(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+       return (long) current;
+}
+
+static const struct bpf_func_proto bpf_get_current_task_proto = {
+       .func           = bpf_get_current_task,
        .gpl_only       = true,
        .ret_type       = RET_INTEGER,
-       .arg1_type      = ARG_PTR_TO_CTX,
-       .arg2_type      = ARG_CONST_MAP_PTR,
-       .arg3_type      = ARG_ANYTHING,
-       .arg4_type      = ARG_PTR_TO_STACK,
-       .arg5_type      = ARG_CONST_STACK_SIZE,
 };
 
-const struct bpf_func_proto *bpf_get_event_output_proto(void)
-{
-       return &bpf_event_output_proto;
-}
-
 static const struct bpf_func_proto *tracing_func_proto(enum bpf_func_id func_id)
 {
        switch (func_id) {
@@ -329,6 +350,8 @@ static const struct bpf_func_proto *tracing_func_proto(enum bpf_func_id func_id)
                return &bpf_tail_call_proto;
        case BPF_FUNC_get_current_pid_tgid:
                return &bpf_get_current_pid_tgid_proto;
+       case BPF_FUNC_get_current_task:
+               return &bpf_get_current_task_proto;
        case BPF_FUNC_get_current_uid_gid:
                return &bpf_get_current_uid_gid_proto;
        case BPF_FUNC_get_current_comm:
index e1c0e99..97e7b79 100644
@@ -4600,15 +4600,11 @@ static void restore_unbound_workers_cpumask(struct worker_pool *pool, int cpu)
        if (!cpumask_test_cpu(cpu, pool->attrs->cpumask))
                return;
 
-       /* is @cpu the only online CPU? */
        cpumask_and(&cpumask, pool->attrs->cpumask, cpu_online_mask);
-       if (cpumask_weight(&cpumask) != 1)
-               return;
 
        /* as we're called from CPU_ONLINE, the following shouldn't fail */
        for_each_pool_worker(worker, pool)
-               WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
-                                                 pool->attrs->cpumask) < 0);
+               WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, &cpumask) < 0);
 }
 
 /*
index 1356454..eb8a19f 100644
@@ -539,17 +539,39 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 {
        struct rb_node *parent = rb_parent(victim);
 
+       /* Copy the pointers/colour from the victim to the replacement */
+       *new = *victim;
+
        /* Set the surrounding nodes to point to the replacement */
-       __rb_change_child(victim, new, parent, root);
        if (victim->rb_left)
                rb_set_parent(victim->rb_left, new);
        if (victim->rb_right)
                rb_set_parent(victim->rb_right, new);
+       __rb_change_child(victim, new, parent, root);
+}
+EXPORT_SYMBOL(rb_replace_node);
+
+void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new,
+                        struct rb_root *root)
+{
+       struct rb_node *parent = rb_parent(victim);
 
        /* Copy the pointers/colour from the victim to the replacement */
        *new = *victim;
+
+       /* Set the surrounding nodes to point to the replacement */
+       if (victim->rb_left)
+               rb_set_parent(victim->rb_left, new);
+       if (victim->rb_right)
+               rb_set_parent(victim->rb_right, new);
+
+       /* Set the parent's pointer to the new node last after an RCU barrier
+        * so that the pointers onwards are seen to be set correctly when doing
+        * an RCU walk over the tree.
+        */
+       __rb_change_child_rcu(victim, new, parent, root);
 }
-EXPORT_SYMBOL(rb_replace_node);
+EXPORT_SYMBOL(rb_replace_node_rcu);
 
 static struct rb_node *rb_left_deepest_node(const struct rb_node *node)
 {
index 79bfe0e..7bc0477 100644
@@ -1009,8 +1009,6 @@ static void isolate_freepages(struct compact_control *cc)
                                block_end_pfn = block_start_pfn,
                                block_start_pfn -= pageblock_nr_pages,
                                isolate_start_pfn = block_start_pfn) {
-               unsigned long isolated;
-
                /*
                 * This can iterate a massively long zone without finding any
                 * suitable migration targets, so periodically check if we need
@@ -1034,36 +1032,30 @@ static void isolate_freepages(struct compact_control *cc)
                        continue;
 
                /* Found a block suitable for isolating free pages from. */
-               isolated = isolate_freepages_block(cc, &isolate_start_pfn,
-                                               block_end_pfn, freelist, false);
-               /* If isolation failed early, do not continue needlessly */
-               if (!isolated && isolate_start_pfn < block_end_pfn &&
-                   cc->nr_migratepages > cc->nr_freepages)
-                       break;
+               isolate_freepages_block(cc, &isolate_start_pfn, block_end_pfn,
+                                       freelist, false);
 
                /*
-                * If we isolated enough freepages, or aborted due to async
-                * compaction being contended, terminate the loop.
-                * Remember where the free scanner should restart next time,
-                * which is where isolate_freepages_block() left off.
-                * But if it scanned the whole pageblock, isolate_start_pfn
-                * now points at block_end_pfn, which is the start of the next
-                * pageblock.
-                * In that case we will however want to restart at the start
-                * of the previous pageblock.
+                * If we isolated enough freepages, or aborted due to lock
+                * contention, terminate.
                 */
                if ((cc->nr_freepages >= cc->nr_migratepages)
                                                        || cc->contended) {
-                       if (isolate_start_pfn >= block_end_pfn)
+                       if (isolate_start_pfn >= block_end_pfn) {
+                               /*
+                                * Restart at previous pageblock if more
+                                * freepages can be isolated next time.
+                                */
                                isolate_start_pfn =
                                        block_start_pfn - pageblock_nr_pages;
+                       }
                        break;
-               } else {
+               } else if (isolate_start_pfn < block_end_pfn) {
                        /*
-                        * isolate_freepages_block() should not terminate
-                        * prematurely unless contended, or isolated enough
+                        * If isolation failed early, do not continue
+                        * needlessly.
                         */
-                       VM_BUG_ON(isolate_start_pfn < block_end_pfn);
+                       break;
                }
        }
 
index 9ed5853..343a2b7 100644
@@ -1624,14 +1624,9 @@ int madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
        if (next - addr != HPAGE_PMD_SIZE) {
                get_page(page);
                spin_unlock(ptl);
-               if (split_huge_page(page)) {
-                       put_page(page);
-                       unlock_page(page);
-                       goto out_unlocked;
-               }
+               split_huge_page(page);
                put_page(page);
                unlock_page(page);
-               ret = 1;
                goto out_unlocked;
        }
 
@@ -2989,7 +2984,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 }
 
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-               unsigned long address, bool freeze)
+               unsigned long address, bool freeze, struct page *page)
 {
        spinlock_t *ptl;
        struct mm_struct *mm = vma->vm_mm;
@@ -2997,8 +2992,17 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
        mmu_notifier_invalidate_range_start(mm, haddr, haddr + HPAGE_PMD_SIZE);
        ptl = pmd_lock(mm, pmd);
+
+       /*
+        * If the caller asks us to set up migration entries, we need a page
+        * to check the pmd against. Otherwise we can end up replacing the
+        * wrong page.
+        */
+       VM_BUG_ON(freeze && !page);
+       if (page && page != pmd_page(*pmd))
+               goto out;
+
        if (pmd_trans_huge(*pmd)) {
-               struct page *page = pmd_page(*pmd);
+               page = pmd_page(*pmd);
                if (PageMlocked(page))
                        clear_page_mlock(page);
        } else if (!pmd_devmap(*pmd))
@@ -3025,24 +3029,8 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
                return;
 
        pmd = pmd_offset(pud, address);
-       if (!pmd_present(*pmd) || (!pmd_trans_huge(*pmd) && !pmd_devmap(*pmd)))
-               return;
 
-       /*
-        * If caller asks to setup a migration entries, we need a page to check
-        * pmd against. Otherwise we can end up replacing wrong page.
-        */
-       VM_BUG_ON(freeze && !page);
-       if (page && page != pmd_page(*pmd))
-               return;
-
-       /*
-        * Caller holds the mmap_sem write mode or the anon_vma lock,
-        * so a huge pmd cannot materialize from under us (khugepaged
-        * holds both the mmap_sem write mode and the anon_vma lock
-        * write mode).
-        */
-       __split_huge_pmd(vma, pmd, address, freeze);
+       __split_huge_pmd(vma, pmd, address, freeze, page);
 }
 
 void vma_adjust_trans_huge(struct vm_area_struct *vma,
index c1f3c0b..addfe4a 100644
@@ -3383,7 +3383,7 @@ retry_avoidcopy:
        /* If no-one else is actually using this page, avoid the copy
         * and just make the page writable */
        if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
-               page_move_anon_rmap(old_page, vma, address);
+               page_move_anon_rmap(old_page, vma);
                set_huge_ptep_writable(vma, address, ptep);
                return 0;
        }
index 4973505..65793f1 100644
@@ -238,30 +238,23 @@ static void qlist_move_cache(struct qlist_head *from,
                                   struct qlist_head *to,
                                   struct kmem_cache *cache)
 {
-       struct qlist_node *prev = NULL, *curr;
+       struct qlist_node *curr;
 
        if (unlikely(qlist_empty(from)))
                return;
 
        curr = from->head;
+       qlist_init(from);
        while (curr) {
-               struct qlist_node *qlink = curr;
-               struct kmem_cache *obj_cache = qlink_to_cache(qlink);
-
-               if (obj_cache == cache) {
-                       if (unlikely(from->head == qlink)) {
-                               from->head = curr->next;
-                               prev = curr;
-                       } else
-                               prev->next = curr->next;
-                       if (unlikely(from->tail == qlink))
-                               from->tail = curr->next;
-                       from->bytes -= cache->size;
-                       qlist_put(to, qlink, cache->size);
-               } else {
-                       prev = curr;
-               }
-               curr = curr->next;
+               struct qlist_node *next = curr->next;
+               struct kmem_cache *obj_cache = qlink_to_cache(curr);
+
+               if (obj_cache == cache)
+                       qlist_put(to, curr, obj_cache->size);
+               else
+                       qlist_put(from, curr, obj_cache->size);
+
+               curr = next;
        }
 }
 
index ac8664d..5339c89 100644
@@ -4057,6 +4057,60 @@ static struct cftype mem_cgroup_legacy_files[] = {
        { },    /* terminate */
 };
 
+/*
+ * Private memory cgroup IDR
+ *
+ * Swap-out records and page cache shadow entries need to store memcg
+ * references in constrained space, so we maintain an ID space that is
+ * limited to 16 bit (MEM_CGROUP_ID_MAX), limiting the total number of
+ * memory-controlled cgroups to 64k.
+ *
+ * However, there usually are many references to the offline CSS after
+ * the cgroup has been destroyed, such as page cache or reclaimable
+ * slab objects, that don't need to hang on to the ID. We want to keep
+ * those dead CSS from occupying IDs, or we might quickly exhaust the
+ * relatively small ID space and prevent the creation of new cgroups
+ * even when there are much fewer than 64k cgroups - possibly none.
+ *
+ * Maintain a private 16-bit ID space for memcg, and allow the ID to
+ * be freed and recycled when it's no longer needed, which is usually
+ * when the CSS is offlined.
+ *
+ * The only exception to that are records of swapped out tmpfs/shmem
+ * pages that need to be attributed to live ancestors on swapin. But
+ * those references are manageable from userspace.
+ */
+
+static DEFINE_IDR(mem_cgroup_idr);
+
+static void mem_cgroup_id_get(struct mem_cgroup *memcg)
+{
+       atomic_inc(&memcg->id.ref);
+}
+
+static void mem_cgroup_id_put(struct mem_cgroup *memcg)
+{
+       if (atomic_dec_and_test(&memcg->id.ref)) {
+               idr_remove(&mem_cgroup_idr, memcg->id.id);
+               memcg->id.id = 0;
+
+               /* Memcg ID pins CSS */
+               css_put(&memcg->css);
+       }
+}
+
+/**
+ * mem_cgroup_from_id - look up a memcg from a memcg id
+ * @id: the memcg id to look up
+ *
+ * Caller must hold rcu_read_lock().
+ */
+struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
+{
+       WARN_ON_ONCE(!rcu_read_lock_held());
+       return idr_find(&mem_cgroup_idr, id);
+}
+
 static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 {
        struct mem_cgroup_per_node *pn;
@@ -4116,6 +4170,12 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
        if (!memcg)
                return NULL;
 
+       memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
+                                1, MEM_CGROUP_ID_MAX,
+                                GFP_KERNEL);
+       if (memcg->id.id < 0)
+               goto fail;
+
        memcg->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
        if (!memcg->stat)
                goto fail;
@@ -4142,8 +4202,11 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 #ifdef CONFIG_CGROUP_WRITEBACK
        INIT_LIST_HEAD(&memcg->cgwb_list);
 #endif
+       idr_replace(&mem_cgroup_idr, memcg, memcg->id.id);
        return memcg;
 fail:
+       if (memcg->id.id > 0)
+               idr_remove(&mem_cgroup_idr, memcg->id.id);
        mem_cgroup_free(memcg);
        return NULL;
 }
@@ -4206,12 +4269,11 @@ fail:
        return ERR_PTR(-ENOMEM);
 }
 
-static int
-mem_cgroup_css_online(struct cgroup_subsys_state *css)
+static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
-       if (css->id > MEM_CGROUP_ID_MAX)
-               return -ENOSPC;
-
+       /* Online state pins memcg ID, memcg ID pins CSS */
+       mem_cgroup_id_get(mem_cgroup_from_css(css));
+       css_get(css);
        return 0;
 }
 
@@ -4234,6 +4296,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 
        memcg_offline_kmem(memcg);
        wb_memcg_offline(memcg);
+
+       mem_cgroup_id_put(memcg);
 }
 
 static void mem_cgroup_css_released(struct cgroup_subsys_state *css)
@@ -5756,6 +5820,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
        if (!memcg)
                return;
 
+       mem_cgroup_id_get(memcg);
        oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg));
        VM_BUG_ON_PAGE(oldid, page);
        mem_cgroup_swap_statistics(memcg, true);
@@ -5774,6 +5839,9 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
        VM_BUG_ON(!irqs_disabled());
        mem_cgroup_charge_statistics(memcg, page, false, -1);
        memcg_check_events(memcg, page);
+
+       if (!mem_cgroup_is_root(memcg))
+               css_put(&memcg->css);
 }
 
 /*
@@ -5804,11 +5872,11 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
            !page_counter_try_charge(&memcg->swap, 1, &counter))
                return -ENOMEM;
 
+       mem_cgroup_id_get(memcg);
        oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg));
        VM_BUG_ON_PAGE(oldid, page);
        mem_cgroup_swap_statistics(memcg, true);
 
-       css_get(&memcg->css);
        return 0;
 }
 
@@ -5837,7 +5905,7 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry)
                                page_counter_uncharge(&memcg->memsw, 1);
                }
                mem_cgroup_swap_statistics(memcg, false);
-               css_put(&memcg->css);
+               mem_cgroup_id_put(memcg);
        }
        rcu_read_unlock();
 }
index cd1f29e..9e04681 100644
@@ -2399,8 +2399,7 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
                                 * Protected against the rmap code by
                                 * the page lock.
                                 */
-                               page_move_anon_rmap(compound_head(old_page),
-                                                   vma, address);
+                               page_move_anon_rmap(old_page, vma);
                        }
                        unlock_page(old_page);
                        return wp_page_reuse(mm, vma, address, page_table, ptl,
index 6903b69..8b3e134 100644
@@ -286,7 +286,9 @@ static inline void reset_deferred_meminit(pg_data_t *pgdat)
 /* Returns true if the struct page for the pfn is uninitialised */
 static inline bool __meminit early_page_uninitialised(unsigned long pfn)
 {
-       if (pfn >= NODE_DATA(early_pfn_to_nid(pfn))->first_deferred_pfn)
+       int nid = early_pfn_to_nid(pfn);
+
+       if (node_online(nid) && pfn >= NODE_DATA(nid)->first_deferred_pfn)
                return true;
 
        return false;
@@ -1273,7 +1275,7 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
        spin_lock(&early_pfn_lock);
        nid = __early_pfn_to_nid(pfn, &early_pfnnid_cache);
        if (nid < 0)
-               nid = 0;
+               nid = first_online_node;
        spin_unlock(&early_pfn_lock);
 
        return nid;
index 0ea5d90..701b93f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1084,23 +1084,20 @@ EXPORT_SYMBOL_GPL(page_mkclean);
  * page_move_anon_rmap - move a page to our anon_vma
  * @page:      the page to move to our anon_vma
  * @vma:       the vma the page belongs to
- * @address:   the user virtual address mapped
  *
  * When a page belongs exclusively to one process after a COW event,
  * that page can be moved into the anon_vma that belongs to just that
  * process, so the rmap code will not search the parent or sibling
  * processes.
  */
-void page_move_anon_rmap(struct page *page,
-       struct vm_area_struct *vma, unsigned long address)
+void page_move_anon_rmap(struct page *page, struct vm_area_struct *vma)
 {
        struct anon_vma *anon_vma = vma->anon_vma;
 
+       page = compound_head(page);
+
        VM_BUG_ON_PAGE(!PageLocked(page), page);
        VM_BUG_ON_VMA(!anon_vma, vma);
-       if (IS_ENABLED(CONFIG_DEBUG_VM) && PageTransHuge(page))
-               address &= HPAGE_PMD_MASK;
-       VM_BUG_ON_PAGE(page->index != linear_page_index(vma, address), page);
 
        anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
        /*
@@ -1427,7 +1424,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
                        goto out;
        }
 
-       pte = page_check_address(page, mm, address, &ptl, 0);
+       pte = page_check_address(page, mm, address, &ptl,
+                                PageTransCompound(page));
        if (!pte)
                goto out;
 
index 24463b6..171dee7 100644
@@ -2225,9 +2225,11 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
                        error = shmem_getpage(inode, index, &page, SGP_FALLOC);
                if (error) {
                        /* Remove the !PageUptodate pages we added */
-                       shmem_undo_range(inode,
-                               (loff_t)start << PAGE_SHIFT,
-                               ((loff_t)index << PAGE_SHIFT) - 1, true);
+                       if (index > start) {
+                               shmem_undo_range(inode,
+                                   (loff_t)start << PAGE_SHIFT,
+                                   ((loff_t)index << PAGE_SHIFT) - 1, true);
+                       }
                        goto undone;
                }
 
index a65dad7..82317ab 100644
@@ -526,8 +526,8 @@ void memcg_create_kmem_cache(struct mem_cgroup *memcg,
                goto out_unlock;
 
        cgroup_name(css->cgroup, memcg_name_buf, sizeof(memcg_name_buf));
-       cache_name = kasprintf(GFP_KERNEL, "%s(%d:%s)", root_cache->name,
-                              css->id, memcg_name_buf);
+       cache_name = kasprintf(GFP_KERNEL, "%s(%llu:%s)", root_cache->name,
+                              css->serial_nr, memcg_name_buf);
        if (!cache_name)
                goto out_unlock;
 
index 8a75f8d..5772775 100644
@@ -491,7 +491,7 @@ static int __init workingset_init(void)
        max_order = fls_long(totalram_pages - 1);
        if (max_order > timestamp_bits)
                bucket_order = max_order - timestamp_bits;
-       printk("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
+       pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
               timestamp_bits, max_order, bucket_order);
 
        ret = list_lru_init_key(&workingset_shadow_nodes, &shadow_nodes_key);
index ae1d419..86450b7 100644
@@ -47,6 +47,9 @@ static int lowpan_ndisc_parse_options(const struct net_device *dev,
                                      struct nd_opt_hdr *nd_opt,
                                      struct ndisc_options *ndopts)
 {
+       if (!lowpan_is_ll(dev, LOWPAN_LLTYPE_IEEE802154))
+               return 0;
+
        switch (nd_opt->nd_opt_type) {
        case ND_OPT_SOURCE_LL_ADDR:
        case ND_OPT_TARGET_LL_ADDR:
@@ -94,10 +97,13 @@ static void lowpan_ndisc_802154_update(struct neighbour *n, u32 flags,
        }
 
        write_lock_bh(&n->lock);
-       if (lladdr_short)
+       if (lladdr_short) {
                ieee802154_be16_to_le16(&neigh->short_addr, lladdr_short);
-       else
+               if (!lowpan_802154_is_valid_src_short_addr(neigh->short_addr))
+                       neigh->short_addr = cpu_to_le16(IEEE802154_ADDR_SHORT_UNSPEC);
+       } else {
                neigh->short_addr = cpu_to_le16(IEEE802154_ADDR_SHORT_UNSPEC);
+       }
        write_unlock_bh(&n->lock);
 }
 
@@ -135,8 +141,9 @@ static int lowpan_ndisc_opt_addr_space(const struct net_device *dev,
                        read_unlock_bh(&neigh->lock);
                        addr_space += __ndisc_opt_addr_space(IEEE802154_SHORT_ADDR_LEN, 0);
                        *ha = ha_buf;
+               } else {
+                       read_unlock_bh(&neigh->lock);
                }
-               read_unlock_bh(&neigh->lock);
                break;
        case NDISC_NEIGHBOUR_ADVERTISEMENT:
        case NDISC_NEIGHBOUR_SOLICITATION:
index c8f422c..fbfacd5 100644
@@ -146,10 +146,12 @@ static netdev_tx_t vlan_dev_hard_start_xmit(struct sk_buff *skb,
 
 static int vlan_dev_change_mtu(struct net_device *dev, int new_mtu)
 {
-       /* TODO: gotta make sure the underlying layer can handle it,
-        * maybe an IFF_VLAN_CAPABLE flag for devices?
-        */
-       if (vlan_dev_priv(dev)->real_dev->mtu < new_mtu)
+       struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
+       unsigned int max_mtu = real_dev->mtu;
+
+       if (netif_reduces_vlan_mtu(real_dev))
+               max_mtu -= VLAN_HLEN;
+       if (max_mtu < new_mtu)
                return -ERANGE;
 
        dev->mtu = new_mtu;
index c92b52f..1270207 100644
@@ -118,6 +118,7 @@ static int vlan_newlink(struct net *src_net, struct net_device *dev,
 {
        struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
        struct net_device *real_dev;
+       unsigned int max_mtu;
        __be16 proto;
        int err;
 
@@ -144,9 +145,11 @@ static int vlan_newlink(struct net *src_net, struct net_device *dev,
        if (err < 0)
                return err;
 
+       max_mtu = netif_reduces_vlan_mtu(real_dev) ? real_dev->mtu - VLAN_HLEN :
+                                                    real_dev->mtu;
        if (!tb[IFLA_MTU])
-               dev->mtu = real_dev->mtu;
-       else if (dev->mtu > real_dev->mtu)
+               dev->mtu = max_mtu;
+       else if (dev->mtu > max_mtu)
                return -EINVAL;
 
        err = vlan_changelink(dev, tb, data);
index ff40562..c2cdbce 100644
@@ -237,6 +237,7 @@ source "net/hsr/Kconfig"
 source "net/switchdev/Kconfig"
 source "net/l3mdev/Kconfig"
 source "net/qrtr/Kconfig"
+source "net/ncsi/Kconfig"
 
 config RPS
        bool
index bdd1455..9bd20bb 100644
@@ -79,3 +79,4 @@ ifneq ($(CONFIG_NET_L3_MASTER_DEV),)
 obj-y                          += l3mdev/
 endif
 obj-$(CONFIG_QRTR)             += qrtr/
+obj-$(CONFIG_NET_NCSI)         += ncsi/
index e4f7494..ad2ffe1 100644
@@ -178,10 +178,21 @@ static void batadv_backbone_gw_put(struct batadv_bla_backbone_gw *backbone_gw)
 static void batadv_claim_release(struct kref *ref)
 {
        struct batadv_bla_claim *claim;
+       struct batadv_bla_backbone_gw *old_backbone_gw;
 
        claim = container_of(ref, struct batadv_bla_claim, refcount);
 
-       batadv_backbone_gw_put(claim->backbone_gw);
+       spin_lock_bh(&claim->backbone_lock);
+       old_backbone_gw = claim->backbone_gw;
+       claim->backbone_gw = NULL;
+       spin_unlock_bh(&claim->backbone_lock);
+
+       spin_lock_bh(&old_backbone_gw->crc_lock);
+       old_backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
+       spin_unlock_bh(&old_backbone_gw->crc_lock);
+
+       batadv_backbone_gw_put(old_backbone_gw);
+
        kfree_rcu(claim, rcu);
 }
 
@@ -419,9 +430,12 @@ static void batadv_bla_send_claim(struct batadv_priv *bat_priv, u8 *mac,
                break;
        }
 
-       if (vid & BATADV_VLAN_HAS_TAG)
+       if (vid & BATADV_VLAN_HAS_TAG) {
                skb = vlan_insert_tag(skb, htons(ETH_P_8021Q),
                                      vid & VLAN_VID_MASK);
+               if (!skb)
+                       goto out;
+       }
 
        skb_reset_mac_header(skb);
        skb->protocol = eth_type_trans(skb, soft_iface);
@@ -675,8 +689,10 @@ static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
                                 const u8 *mac, const unsigned short vid,
                                 struct batadv_bla_backbone_gw *backbone_gw)
 {
+       struct batadv_bla_backbone_gw *old_backbone_gw;
        struct batadv_bla_claim *claim;
        struct batadv_bla_claim search_claim;
+       bool remove_crc = false;
        int hash_added;
 
        ether_addr_copy(search_claim.addr, mac);
@@ -690,8 +706,10 @@ static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
                        return;
 
                ether_addr_copy(claim->addr, mac);
+               spin_lock_init(&claim->backbone_lock);
                claim->vid = vid;
                claim->lasttime = jiffies;
+               kref_get(&backbone_gw->refcount);
                claim->backbone_gw = backbone_gw;
 
                kref_init(&claim->refcount);
@@ -719,15 +737,26 @@ static void batadv_bla_add_claim(struct batadv_priv *bat_priv,
                           "bla_add_claim(): changing ownership for %pM, vid %d\n",
                           mac, BATADV_PRINT_VID(vid));
 
-               spin_lock_bh(&claim->backbone_gw->crc_lock);
-               claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
-               spin_unlock_bh(&claim->backbone_gw->crc_lock);
-               batadv_backbone_gw_put(claim->backbone_gw);
+               remove_crc = true;
        }
-       /* set (new) backbone gw */
+
+       /* replace backbone_gw atomically and adjust reference counters */
+       spin_lock_bh(&claim->backbone_lock);
+       old_backbone_gw = claim->backbone_gw;
        kref_get(&backbone_gw->refcount);
        claim->backbone_gw = backbone_gw;
+       spin_unlock_bh(&claim->backbone_lock);
 
+       if (remove_crc) {
+               /* remove claim address from old backbone_gw */
+               spin_lock_bh(&old_backbone_gw->crc_lock);
+               old_backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
+               spin_unlock_bh(&old_backbone_gw->crc_lock);
+       }
+
+       batadv_backbone_gw_put(old_backbone_gw);
+
+       /* add claim address to new backbone_gw */
        spin_lock_bh(&backbone_gw->crc_lock);
        backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
        spin_unlock_bh(&backbone_gw->crc_lock);
@@ -737,6 +766,26 @@ claim_free_ref:
        batadv_claim_put(claim);
 }
 
+/**
+ * batadv_bla_claim_get_backbone_gw - Get valid reference for backbone_gw of
+ *  claim
+ * @claim: claim whose backbone_gw should be returned
+ *
+ * Return: valid reference to claim::backbone_gw
+ */
+static struct batadv_bla_backbone_gw *
+batadv_bla_claim_get_backbone_gw(struct batadv_bla_claim *claim)
+{
+       struct batadv_bla_backbone_gw *backbone_gw;
+
+       spin_lock_bh(&claim->backbone_lock);
+       backbone_gw = claim->backbone_gw;
+       kref_get(&backbone_gw->refcount);
+       spin_unlock_bh(&claim->backbone_lock);
+
+       return backbone_gw;
+}
+
 /**
  * batadv_bla_del_claim - delete a claim from the claim hash
  * @bat_priv: the bat priv with all the soft interface information
@@ -761,10 +810,6 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
                           batadv_choose_claim, claim);
        batadv_claim_put(claim); /* reference from the hash is gone */
 
-       spin_lock_bh(&claim->backbone_gw->crc_lock);
-       claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
-       spin_unlock_bh(&claim->backbone_gw->crc_lock);
-
        /* don't need the reference from hash_find() anymore */
        batadv_claim_put(claim);
 }
@@ -1217,6 +1262,7 @@ static void batadv_bla_purge_claims(struct batadv_priv *bat_priv,
                                    struct batadv_hard_iface *primary_if,
                                    int now)
 {
+       struct batadv_bla_backbone_gw *backbone_gw;
        struct batadv_bla_claim *claim;
        struct hlist_head *head;
        struct batadv_hashtable *hash;
@@ -1231,14 +1277,17 @@ static void batadv_bla_purge_claims(struct batadv_priv *bat_priv,
 
                rcu_read_lock();
                hlist_for_each_entry_rcu(claim, head, hash_entry) {
+                       backbone_gw = batadv_bla_claim_get_backbone_gw(claim);
                        if (now)
                                goto purge_now;
-                       if (!batadv_compare_eth(claim->backbone_gw->orig,
+
+                       if (!batadv_compare_eth(backbone_gw->orig,
                                                primary_if->net_dev->dev_addr))
-                               continue;
+                               goto skip;
+
                        if (!batadv_has_timed_out(claim->lasttime,
                                                  BATADV_BLA_CLAIM_TIMEOUT))
-                               continue;
+                               goto skip;
 
                        batadv_dbg(BATADV_DBG_BLA, bat_priv,
                                   "bla_purge_claims(): %pM, vid %d, time out\n",
@@ -1246,8 +1295,10 @@ static void batadv_bla_purge_claims(struct batadv_priv *bat_priv,
 
 purge_now:
                        batadv_handle_unclaim(bat_priv, primary_if,
-                                             claim->backbone_gw->orig,
+                                             backbone_gw->orig,
                                              claim->addr, claim->vid);
+skip:
+                       batadv_backbone_gw_put(backbone_gw);
                }
                rcu_read_unlock();
        }
@@ -1758,9 +1809,11 @@ batadv_bla_loopdetect_check(struct batadv_priv *bat_priv, struct sk_buff *skb,
 bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
                   unsigned short vid, bool is_bcast)
 {
+       struct batadv_bla_backbone_gw *backbone_gw;
        struct ethhdr *ethhdr;
        struct batadv_bla_claim search_claim, *claim = NULL;
        struct batadv_hard_iface *primary_if;
+       bool own_claim;
        bool ret;
 
        ethhdr = eth_hdr(skb);
@@ -1795,8 +1848,12 @@ bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
        }
 
        /* if it is our own claim ... */
-       if (batadv_compare_eth(claim->backbone_gw->orig,
-                              primary_if->net_dev->dev_addr)) {
+       backbone_gw = batadv_bla_claim_get_backbone_gw(claim);
+       own_claim = batadv_compare_eth(backbone_gw->orig,
+                                      primary_if->net_dev->dev_addr);
+       batadv_backbone_gw_put(backbone_gw);
+
+       if (own_claim) {
                /* ... allow it in any case */
                claim->lasttime = jiffies;
                goto allow;
@@ -1860,7 +1917,9 @@ bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
 {
        struct ethhdr *ethhdr;
        struct batadv_bla_claim search_claim, *claim = NULL;
+       struct batadv_bla_backbone_gw *backbone_gw;
        struct batadv_hard_iface *primary_if;
+       bool client_roamed;
        bool ret = false;
 
        primary_if = batadv_primary_if_get_selected(bat_priv);
@@ -1890,8 +1949,12 @@ bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
                goto allow;
 
        /* check if we are responsible. */
-       if (batadv_compare_eth(claim->backbone_gw->orig,
-                              primary_if->net_dev->dev_addr)) {
+       backbone_gw = batadv_bla_claim_get_backbone_gw(claim);
+       client_roamed = batadv_compare_eth(backbone_gw->orig,
+                                          primary_if->net_dev->dev_addr);
+       batadv_backbone_gw_put(backbone_gw);
+
+       if (client_roamed) {
                /* if yes, the client has roamed and we have
                 * to unclaim it.
                 */
@@ -1939,6 +2002,7 @@ int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset)
        struct net_device *net_dev = (struct net_device *)seq->private;
        struct batadv_priv *bat_priv = netdev_priv(net_dev);
        struct batadv_hashtable *hash = bat_priv->bla.claim_hash;
+       struct batadv_bla_backbone_gw *backbone_gw;
        struct batadv_bla_claim *claim;
        struct batadv_hard_iface *primary_if;
        struct hlist_head *head;
@@ -1963,17 +2027,21 @@ int batadv_bla_claim_table_seq_print_text(struct seq_file *seq, void *offset)
 
                rcu_read_lock();
                hlist_for_each_entry_rcu(claim, head, hash_entry) {
-                       is_own = batadv_compare_eth(claim->backbone_gw->orig,
+                       backbone_gw = batadv_bla_claim_get_backbone_gw(claim);
+
+                       is_own = batadv_compare_eth(backbone_gw->orig,
                                                    primary_addr);
 
-                       spin_lock_bh(&claim->backbone_gw->crc_lock);
-                       backbone_crc = claim->backbone_gw->crc;
-                       spin_unlock_bh(&claim->backbone_gw->crc_lock);
+                       spin_lock_bh(&backbone_gw->crc_lock);
+                       backbone_crc = backbone_gw->crc;
+                       spin_unlock_bh(&backbone_gw->crc_lock);
                        seq_printf(seq, " * %pM on %5d by %pM [%c] (%#.4x)\n",
                                   claim->addr, BATADV_PRINT_VID(claim->vid),
-                                  claim->backbone_gw->orig,
+                                  backbone_gw->orig,
                                   (is_own ? 'x' : ' '),
                                   backbone_crc);
+
+                       batadv_backbone_gw_put(backbone_gw);
                }
                rcu_read_unlock();
        }
index fa76465..b1cc8bf 100644
@@ -1011,9 +1011,12 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
                if (!skb_new)
                        goto out;
 
-               if (vid & BATADV_VLAN_HAS_TAG)
+               if (vid & BATADV_VLAN_HAS_TAG) {
                        skb_new = vlan_insert_tag(skb_new, htons(ETH_P_8021Q),
                                                  vid & VLAN_VID_MASK);
+                       if (!skb_new)
+                               goto out;
+               }
 
                skb_reset_mac_header(skb_new);
                skb_new->protocol = eth_type_trans(skb_new,
@@ -1091,9 +1094,12 @@ bool batadv_dat_snoop_incoming_arp_request(struct batadv_priv *bat_priv,
         */
        skb_reset_mac_header(skb_new);
 
-       if (vid & BATADV_VLAN_HAS_TAG)
+       if (vid & BATADV_VLAN_HAS_TAG) {
                skb_new = vlan_insert_tag(skb_new, htons(ETH_P_8021Q),
                                          vid & VLAN_VID_MASK);
+               if (!skb_new)
+                       goto out;
+       }
 
        /* To preserve backwards compatibility, the node has to choose the outgoing
         * format based on the incoming request packet type. The assumption is
index 7d1e542..3940b5d 100644
@@ -785,6 +785,8 @@ static void batadv_orig_node_release(struct kref *ref)
        struct batadv_neigh_node *neigh_node;
        struct batadv_orig_node *orig_node;
        struct batadv_orig_ifinfo *orig_ifinfo;
+       struct batadv_orig_node_vlan *vlan;
+       struct batadv_orig_ifinfo *last_candidate;
 
        orig_node = container_of(ref, struct batadv_orig_node, refcount);
 
@@ -802,8 +804,21 @@ static void batadv_orig_node_release(struct kref *ref)
                hlist_del_rcu(&orig_ifinfo->list);
                batadv_orig_ifinfo_put(orig_ifinfo);
        }
+
+       last_candidate = orig_node->last_bonding_candidate;
+       orig_node->last_bonding_candidate = NULL;
        spin_unlock_bh(&orig_node->neigh_list_lock);
 
+       if (last_candidate)
+               batadv_orig_ifinfo_put(last_candidate);
+
+       spin_lock_bh(&orig_node->vlan_list_lock);
+       hlist_for_each_entry_safe(vlan, node_tmp, &orig_node->vlan_list, list) {
+               hlist_del_rcu(&vlan->list);
+               batadv_orig_node_vlan_put(vlan);
+       }
+       spin_unlock_bh(&orig_node->vlan_list_lock);
+
        /* Free nc_nodes */
        batadv_nc_purge_orig(orig_node->bat_priv, orig_node, NULL);
 
index af8e119..7602c00 100644
@@ -469,6 +469,29 @@ static int batadv_check_unicast_packet(struct batadv_priv *bat_priv,
        return 0;
 }
 
+/**
+ * batadv_last_bonding_replace - Replace last_bonding_candidate of orig_node
+ * @orig_node: originator node whose bonding candidates should be replaced
+ * @new_candidate: new bonding candidate or NULL
+ */
+static void
+batadv_last_bonding_replace(struct batadv_orig_node *orig_node,
+                           struct batadv_orig_ifinfo *new_candidate)
+{
+       struct batadv_orig_ifinfo *old_candidate;
+
+       spin_lock_bh(&orig_node->neigh_list_lock);
+       old_candidate = orig_node->last_bonding_candidate;
+
+       if (new_candidate)
+               kref_get(&new_candidate->refcount);
+       orig_node->last_bonding_candidate = new_candidate;
+       spin_unlock_bh(&orig_node->neigh_list_lock);
+
+       if (old_candidate)
+               batadv_orig_ifinfo_put(old_candidate);
+}
+
 /**
  * batadv_find_router - find a suitable router for this originator
  * @bat_priv: the bat priv with all the soft interface information
@@ -576,10 +599,6 @@ next:
        }
        rcu_read_unlock();
 
-       /* last_bonding_candidate is reset below, remove the old reference. */
-       if (orig_node->last_bonding_candidate)
-               batadv_orig_ifinfo_put(orig_node->last_bonding_candidate);
-
        /* After finding candidates, handle the three cases:
         * 1) there is a next candidate, use that
         * 2) there is no next candidate, use the first of the list
@@ -588,21 +607,28 @@ next:
        if (next_candidate) {
                batadv_neigh_node_put(router);
 
-               /* remove references to first candidate, we don't need it. */
-               if (first_candidate) {
-                       batadv_neigh_node_put(first_candidate_router);
-                       batadv_orig_ifinfo_put(first_candidate);
-               }
+               kref_get(&next_candidate_router->refcount);
                router = next_candidate_router;
-               orig_node->last_bonding_candidate = next_candidate;
+               batadv_last_bonding_replace(orig_node, next_candidate);
        } else if (first_candidate) {
                batadv_neigh_node_put(router);
 
-               /* refcounting has already been done in the loop above. */
+               kref_get(&first_candidate_router->refcount);
                router = first_candidate_router;
-               orig_node->last_bonding_candidate = first_candidate;
+               batadv_last_bonding_replace(orig_node, first_candidate);
        } else {
-               orig_node->last_bonding_candidate = NULL;
+               batadv_last_bonding_replace(orig_node, NULL);
+       }
+
+       /* cleanup of candidates */
+       if (first_candidate) {
+               batadv_neigh_node_put(first_candidate_router);
+               batadv_orig_ifinfo_put(first_candidate);
+       }
+
+       if (next_candidate) {
+               batadv_neigh_node_put(next_candidate_router);
+               batadv_orig_ifinfo_put(next_candidate);
        }
 
        return router;
index 3a10d87..6191159 100644
@@ -435,8 +435,8 @@ int batadv_send_skb_via_gw(struct batadv_priv *bat_priv, struct sk_buff *skb,
        struct batadv_orig_node *orig_node;
 
        orig_node = batadv_gw_get_selected_orig(bat_priv);
-       return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0,
-                                      orig_node, vid);
+       return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST_4ADDR,
+                                      BATADV_P_DATA, orig_node, vid);
 }
 
 void batadv_forw_packet_free(struct batadv_forw_packet *forw_packet)
index 43db7b6..a64522c 100644
@@ -331,7 +331,9 @@ struct batadv_orig_node {
        DECLARE_BITMAP(bcast_bits, BATADV_TQ_LOCAL_WINDOW_SIZE);
        u32 last_bcast_seqno;
        struct hlist_head neigh_list;
-       /* neigh_list_lock protects: neigh_list and router */
+       /* neigh_list_lock protects: neigh_list, ifinfo_list,
+        * last_bonding_candidate and router
+        */
        spinlock_t neigh_list_lock;
        struct hlist_node hash_entry;
        struct batadv_priv *bat_priv;
@@ -1171,6 +1173,7 @@ struct batadv_bla_backbone_gw {
  * @addr: mac address of claimed non-mesh client
  * @vid: vlan id this client was detected on
  * @backbone_gw: pointer to backbone gw claiming this client
+ * @backbone_lock: lock protecting backbone_gw pointer
  * @lasttime: last time we heard of claim (locals only)
  * @hash_entry: hlist node for batadv_priv_bla::claim_hash
  * @refcount: number of contexts the object is used
@@ -1180,6 +1183,7 @@ struct batadv_bla_claim {
        u8 addr[ETH_ALEN];
        unsigned short vid;
        struct batadv_bla_backbone_gw *backbone_gw;
+       spinlock_t backbone_lock; /* protects backbone_gw */
        unsigned long lasttime;
        struct hlist_node hash_entry;
        struct rcu_head rcu;
index 3df7aef..ece45e0 100644
@@ -215,6 +215,7 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
        struct sock *sk = sock->sk;
        struct sk_buff *skb;
        size_t copied;
+       size_t skblen;
        int err;
 
        BT_DBG("sock %p sk %p len %zu", sock, sk, len);
@@ -230,6 +231,7 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
                return err;
        }
 
+       skblen = skb->len;
        copied = skb->len;
        if (len < copied) {
                msg->msg_flags |= MSG_TRUNC;
@@ -248,6 +250,9 @@ int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 
        skb_free_datagram(sk, skb);
 
+       if (msg->msg_flags & MSG_TRUNC)
+               copied = skblen;
+
        return err ? : copied;
 }
 EXPORT_SYMBOL(bt_sock_recvmsg);
index bf9f8a8..3809617 100644
@@ -625,7 +625,7 @@ struct hci_dev *hci_get_route(bdaddr_t *dst, bdaddr_t *src)
        list_for_each_entry(d, &hci_dev_list, list) {
                if (!test_bit(HCI_UP, &d->flags) ||
                    hci_dev_test_flag(d, HCI_USER_CHANNEL) ||
-                   d->dev_type != HCI_BREDR)
+                   d->dev_type != HCI_PRIMARY)
                        continue;
 
                /* Simple routing:
index 45a9fc6..ddf8432 100644
@@ -260,14 +260,12 @@ static int hci_init1_req(struct hci_request *req, unsigned long opt)
                hci_reset_req(req, 0);
 
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                bredr_init(req);
                break;
-
        case HCI_AMP:
                amp_init1(req);
                break;
-
        default:
                BT_ERR("Unknown device type %d", hdev->dev_type);
                break;
@@ -791,11 +789,11 @@ static int __hci_init(struct hci_dev *hdev)
        if (err < 0)
                return err;
 
-       /* HCI_BREDR covers both single-mode LE, BR/EDR and dual-mode
+       /* HCI_PRIMARY covers both single-mode LE, BR/EDR and dual-mode
         * BR/EDR/LE type controllers. AMP controllers only need the
         * first two stages of init.
         */
-       if (hdev->dev_type != HCI_BREDR)
+       if (hdev->dev_type != HCI_PRIMARY)
                return 0;
 
        err = __hci_req_sync(hdev, hci_init3_req, 0, HCI_INIT_TIMEOUT, NULL);
@@ -1202,7 +1200,7 @@ int hci_inquiry(void __user *arg)
                goto done;
        }
 
-       if (hdev->dev_type != HCI_BREDR) {
+       if (hdev->dev_type != HCI_PRIMARY) {
                err = -EOPNOTSUPP;
                goto done;
        }
@@ -1307,7 +1305,7 @@ static int hci_dev_do_open(struct hci_dev *hdev)
                 * since AMP controllers do not have an address.
                 */
                if (!hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
-                   hdev->dev_type == HCI_BREDR &&
+                   hdev->dev_type == HCI_PRIMARY &&
                    !bacmp(&hdev->bdaddr, BDADDR_ANY) &&
                    !bacmp(&hdev->static_addr, BDADDR_ANY)) {
                        ret = -EADDRNOTAVAIL;
@@ -1402,7 +1400,7 @@ static int hci_dev_do_open(struct hci_dev *hdev)
                    !hci_dev_test_flag(hdev, HCI_UNCONFIGURED) &&
                    !hci_dev_test_flag(hdev, HCI_USER_CHANNEL) &&
                    hci_dev_test_flag(hdev, HCI_MGMT) &&
-                   hdev->dev_type == HCI_BREDR) {
+                   hdev->dev_type == HCI_PRIMARY) {
                        ret = __hci_req_hci_power_on(hdev);
                        mgmt_power_on(hdev, ret);
                }
@@ -1563,7 +1561,7 @@ int hci_dev_do_close(struct hci_dev *hdev)
 
        auto_off = hci_dev_test_and_clear_flag(hdev, HCI_AUTO_OFF);
 
-       if (!auto_off && hdev->dev_type == HCI_BREDR &&
+       if (!auto_off && hdev->dev_type == HCI_PRIMARY &&
            hci_dev_test_flag(hdev, HCI_MGMT))
                __mgmt_power_off(hdev);
 
@@ -1802,7 +1800,7 @@ int hci_dev_cmd(unsigned int cmd, void __user *arg)
                goto done;
        }
 
-       if (hdev->dev_type != HCI_BREDR) {
+       if (hdev->dev_type != HCI_PRIMARY) {
                err = -EOPNOTSUPP;
                goto done;
        }
@@ -2043,7 +2041,7 @@ static void hci_power_on(struct work_struct *work)
         */
        if (hci_dev_test_flag(hdev, HCI_RFKILLED) ||
            hci_dev_test_flag(hdev, HCI_UNCONFIGURED) ||
-           (hdev->dev_type == HCI_BREDR &&
+           (hdev->dev_type == HCI_PRIMARY &&
             !bacmp(&hdev->bdaddr, BDADDR_ANY) &&
             !bacmp(&hdev->static_addr, BDADDR_ANY))) {
                hci_dev_clear_flag(hdev, HCI_AUTO_OFF);
@@ -3030,7 +3028,7 @@ int hci_register_dev(struct hci_dev *hdev)
         * so the index can be used as the AMP controller ID.
         */
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                id = ida_simple_get(&hci_index_ida, 0, 0, GFP_KERNEL);
                break;
        case HCI_AMP:
@@ -3090,7 +3088,7 @@ int hci_register_dev(struct hci_dev *hdev)
        hci_dev_set_flag(hdev, HCI_SETUP);
        hci_dev_set_flag(hdev, HCI_AUTO_OFF);
 
-       if (hdev->dev_type == HCI_BREDR) {
+       if (hdev->dev_type == HCI_PRIMARY) {
                /* Assume BR/EDR support until proven otherwise (such as
                 * through reading supported features during init.
                 */
@@ -3165,6 +3163,8 @@ void hci_unregister_dev(struct hci_dev *hdev)
        device_del(&hdev->dev);
 
        debugfs_remove_recursive(hdev->debugfs);
+       kfree_const(hdev->hw_info);
+       kfree_const(hdev->fw_info);
 
        destroy_workqueue(hdev->workqueue);
        destroy_workqueue(hdev->req_workqueue);
@@ -3268,6 +3268,28 @@ int hci_recv_diag(struct hci_dev *hdev, struct sk_buff *skb)
 }
 EXPORT_SYMBOL(hci_recv_diag);
 
+void hci_set_hw_info(struct hci_dev *hdev, const char *fmt, ...)
+{
+       va_list vargs;
+
+       va_start(vargs, fmt);
+       kfree_const(hdev->hw_info);
+       hdev->hw_info = kvasprintf_const(GFP_KERNEL, fmt, vargs);
+       va_end(vargs);
+}
+EXPORT_SYMBOL(hci_set_hw_info);
+
+void hci_set_fw_info(struct hci_dev *hdev, const char *fmt, ...)
+{
+       va_list vargs;
+
+       va_start(vargs, fmt);
+       kfree_const(hdev->fw_info);
+       hdev->fw_info = kvasprintf_const(GFP_KERNEL, fmt, vargs);
+       va_end(vargs);
+}
+EXPORT_SYMBOL(hci_set_fw_info);
+
 /* ---- Interface to upper protocols ---- */
 
 int hci_register_cb(struct hci_cb *cb)
@@ -3415,7 +3437,7 @@ static void hci_queue_acl(struct hci_chan *chan, struct sk_buff_head *queue,
        hci_skb_pkt_type(skb) = HCI_ACLDATA_PKT;
 
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                hci_add_acl_hdr(skb, conn->handle, flags);
                break;
        case HCI_AMP:
@@ -3826,7 +3848,7 @@ static void hci_sched_acl(struct hci_dev *hdev)
        BT_DBG("%s", hdev->name);
 
        /* No ACL link over BR/EDR controller */
-       if (!hci_conn_num(hdev, ACL_LINK) && hdev->dev_type == HCI_BREDR)
+       if (!hci_conn_num(hdev, ACL_LINK) && hdev->dev_type == HCI_PRIMARY)
                return;
 
        /* No AMP link over AMP controller */
index 7db4220..63df63e 100644
@@ -76,6 +76,30 @@ static const struct file_operations __name ## _fops = {                            \
        .llseek         = default_llseek,                                     \
 }                                                                            \
 
+#define DEFINE_INFO_ATTRIBUTE(__name, __field)                               \
+static int __name ## _show(struct seq_file *f, void *ptr)                    \
+{                                                                            \
+       struct hci_dev *hdev = f->private;                                    \
+                                                                             \
+       hci_dev_lock(hdev);                                                   \
+       seq_printf(f, "%s\n", hdev->__field ? : "");                          \
+       hci_dev_unlock(hdev);                                                 \
+                                                                             \
+       return 0;                                                             \
+}                                                                            \
+                                                                             \
+static int __name ## _open(struct inode *inode, struct file *file)           \
+{                                                                            \
+       return single_open(file, __name ## _show, inode->i_private);          \
+}                                                                            \
+                                                                             \
+static const struct file_operations __name ## _fops = {                              \
+       .open           = __name ## _open,                                    \
+       .read           = seq_read,                                           \
+       .llseek         = seq_lseek,                                          \
+       .release        = single_release,                                     \
+}                                                                            \
+
 static int features_show(struct seq_file *f, void *ptr)
 {
        struct hci_dev *hdev = f->private;
@@ -349,6 +373,9 @@ static const struct file_operations sc_only_mode_fops = {
        .llseek         = default_llseek,
 };
 
+DEFINE_INFO_ATTRIBUTE(hardware_info, hw_info);
+DEFINE_INFO_ATTRIBUTE(firmware_info, fw_info);
+
 void hci_debugfs_create_common(struct hci_dev *hdev)
 {
        debugfs_create_file("features", 0444, hdev->debugfs, hdev,
@@ -382,6 +409,14 @@ void hci_debugfs_create_common(struct hci_dev *hdev)
        if (lmp_sc_capable(hdev) || lmp_le_capable(hdev))
                debugfs_create_file("sc_only_mode", 0444, hdev->debugfs,
                                    hdev, &sc_only_mode_fops);
+
+       if (hdev->hw_info)
+               debugfs_create_file("hardware_info", 0444, hdev->debugfs,
+                                   hdev, &hardware_info_fops);
+
+       if (hdev->fw_info)
+               debugfs_create_file("firmware_info", 0444, hdev->debugfs,
+                                   hdev, &firmware_info_fops);
 }
 
 static int inquiry_cache_show(struct seq_file *f, void *p)
index d4b3dd5..e17aacb 100644
@@ -2332,7 +2332,7 @@ static u8 hci_to_mgmt_reason(u8 err)
 static void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
 {
        struct hci_ev_disconn_complete *ev = (void *) skb->data;
-       u8 reason = hci_to_mgmt_reason(ev->reason);
+       u8 reason;
        struct hci_conn_params *params;
        struct hci_conn *conn;
        bool mgmt_connected;
@@ -2355,6 +2355,12 @@ static void hci_disconn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
        conn->state = BT_CLOSED;
 
        mgmt_connected = test_and_clear_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags);
+
+       if (test_bit(HCI_CONN_AUTH_FAILURE, &conn->flags))
+               reason = MGMT_DEV_DISCONN_AUTH_FAILURE;
+       else
+               reason = hci_to_mgmt_reason(ev->reason);
+
        mgmt_device_disconnected(hdev, &conn->dst, conn->type, conn->dst_type,
                                reason, mgmt_connected);
 
@@ -2421,6 +2427,8 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
                goto unlock;
 
        if (!ev->status) {
+               clear_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+
                if (!hci_conn_ssp_enabled(conn) &&
                    test_bit(HCI_CONN_REAUTH_PEND, &conn->flags)) {
                        BT_INFO("re-auth of legacy device is not possible.");
@@ -2429,6 +2437,9 @@ static void hci_auth_complete_evt(struct hci_dev *hdev, struct sk_buff *skb)
                        conn->sec_level = conn->pending_sec_level;
                }
        } else {
+               if (ev->status == HCI_ERROR_PIN_OR_KEY_MISSING)
+                       set_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+
                mgmt_auth_failed(conn, ev->status);
        }
 
@@ -2613,6 +2624,9 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
        clear_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags);
 
        if (ev->status && conn->state == BT_CONNECTED) {
+               if (ev->status == HCI_ERROR_PIN_OR_KEY_MISSING)
+                       set_bit(HCI_CONN_AUTH_FAILURE, &conn->flags);
+
                hci_disconnect(conn, HCI_ERROR_AUTH_FAILURE);
                hci_conn_drop(conn);
                goto unlock;
@@ -3249,7 +3263,7 @@ static struct hci_conn *__hci_conn_lookup_handle(struct hci_dev *hdev,
        struct hci_chan *chan;
 
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                return hci_conn_hash_lookup_handle(hdev, handle);
        case HCI_AMP:
                chan = hci_chan_lookup_handle(hdev, handle);
index 1298d72..6ef8a01 100644
@@ -676,7 +676,7 @@ static int hci_sock_bound_ioctl(struct sock *sk, unsigned int cmd,
        if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED))
                return -EOPNOTSUPP;
 
-       if (hdev->dev_type != HCI_BREDR)
+       if (hdev->dev_type != HCI_PRIMARY)
                return -EOPNOTSUPP;
 
        switch (cmd) {
@@ -1048,6 +1048,7 @@ static int hci_sock_recvmsg(struct socket *sock, struct msghdr *msg,
        struct sock *sk = sock->sk;
        struct sk_buff *skb;
        int copied, err;
+       unsigned int skblen;
 
        BT_DBG("sock %p, sk %p", sock, sk);
 
@@ -1064,6 +1065,7 @@ static int hci_sock_recvmsg(struct socket *sock, struct msghdr *msg,
        if (!skb)
                return err;
 
+       skblen = skb->len;
        copied = skb->len;
        if (len < copied) {
                msg->msg_flags |= MSG_TRUNC;
@@ -1089,6 +1091,9 @@ static int hci_sock_recvmsg(struct socket *sock, struct msghdr *msg,
 
        skb_free_datagram(sk, skb);
 
+       if (msg->msg_flags & MSG_TRUNC)
+               copied = skblen;
+
        return err ? : copied;
 }
 
index 555982a..ca7a35e 100644
@@ -7,50 +7,6 @@
 
 static struct class *bt_class;
 
-static inline char *link_typetostr(int type)
-{
-       switch (type) {
-       case ACL_LINK:
-               return "ACL";
-       case SCO_LINK:
-               return "SCO";
-       case ESCO_LINK:
-               return "eSCO";
-       case LE_LINK:
-               return "LE";
-       default:
-               return "UNKNOWN";
-       }
-}
-
-static ssize_t show_link_type(struct device *dev,
-                             struct device_attribute *attr, char *buf)
-{
-       struct hci_conn *conn = to_hci_conn(dev);
-       return sprintf(buf, "%s\n", link_typetostr(conn->type));
-}
-
-static ssize_t show_link_address(struct device *dev,
-                                struct device_attribute *attr, char *buf)
-{
-       struct hci_conn *conn = to_hci_conn(dev);
-       return sprintf(buf, "%pMR\n", &conn->dst);
-}
-
-#define LINK_ATTR(_name, _mode, _show, _store) \
-struct device_attribute link_attr_##_name = __ATTR(_name, _mode, _show, _store)
-
-static LINK_ATTR(type, S_IRUGO, show_link_type, NULL);
-static LINK_ATTR(address, S_IRUGO, show_link_address, NULL);
-
-static struct attribute *bt_link_attrs[] = {
-       &link_attr_type.attr,
-       &link_attr_address.attr,
-       NULL
-};
-
-ATTRIBUTE_GROUPS(bt_link);
-
 static void bt_link_release(struct device *dev)
 {
        struct hci_conn *conn = to_hci_conn(dev);
@@ -59,7 +15,6 @@ static void bt_link_release(struct device *dev)
 
 static struct device_type bt_link = {
        .name    = "link",
-       .groups  = bt_link_groups,
        .release = bt_link_release,
 };
 
@@ -124,59 +79,6 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
        hci_dev_put(hdev);
 }
 
-static inline char *host_typetostr(int type)
-{
-       switch (type) {
-       case HCI_BREDR:
-               return "BR/EDR";
-       case HCI_AMP:
-               return "AMP";
-       default:
-               return "UNKNOWN";
-       }
-}
-
-static ssize_t show_type(struct device *dev,
-                        struct device_attribute *attr, char *buf)
-{
-       struct hci_dev *hdev = to_hci_dev(dev);
-       return sprintf(buf, "%s\n", host_typetostr(hdev->dev_type));
-}
-
-static ssize_t show_name(struct device *dev,
-                        struct device_attribute *attr, char *buf)
-{
-       struct hci_dev *hdev = to_hci_dev(dev);
-       char name[HCI_MAX_NAME_LENGTH + 1];
-       int i;
-
-       for (i = 0; i < HCI_MAX_NAME_LENGTH; i++)
-               name[i] = hdev->dev_name[i];
-
-       name[HCI_MAX_NAME_LENGTH] = '\0';
-       return sprintf(buf, "%s\n", name);
-}
-
-static ssize_t show_address(struct device *dev,
-                           struct device_attribute *attr, char *buf)
-{
-       struct hci_dev *hdev = to_hci_dev(dev);
-       return sprintf(buf, "%pMR\n", &hdev->bdaddr);
-}
-
-static DEVICE_ATTR(type, S_IRUGO, show_type, NULL);
-static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
-static DEVICE_ATTR(address, S_IRUGO, show_address, NULL);
-
-static struct attribute *bt_host_attrs[] = {
-       &dev_attr_type.attr,
-       &dev_attr_name.attr,
-       &dev_attr_address.attr,
-       NULL
-};
-
-ATTRIBUTE_GROUPS(bt_host);
-
 static void bt_host_release(struct device *dev)
 {
        struct hci_dev *hdev = to_hci_dev(dev);
@@ -186,7 +88,6 @@ static void bt_host_release(struct device *dev)
 
 static struct device_type bt_host = {
        .name    = "host",
-       .groups  = bt_host_groups,
        .release = bt_host_release,
 };
 
index eb4f5f2..54ceb1f 100644
@@ -7468,7 +7468,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
        int len;
 
        /* For AMP controller do not create l2cap conn */
-       if (!conn && hcon->hdev->dev_type != HCI_BREDR)
+       if (!conn && hcon->hdev->dev_type != HCI_PRIMARY)
                goto drop;
 
        if (!conn)
index 388ee8b..1842141 100644
@@ -927,7 +927,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
                        break;
                }
 
-               if (get_user(opt, (u32 __user *) optval)) {
+               if (get_user(opt, (u16 __user *) optval)) {
                        err = -EFAULT;
                        break;
                }
index 9e4b931..7639290 100644
@@ -38,7 +38,7 @@
 #include "mgmt_util.h"
 
 #define MGMT_VERSION   1
-#define MGMT_REVISION  12
+#define MGMT_REVISION  13
 
 static const u16 mgmt_commands[] = {
        MGMT_OP_READ_INDEX_LIST,
@@ -359,7 +359,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
 
        count = 0;
        list_for_each_entry(d, &hci_dev_list, list) {
-               if (d->dev_type == HCI_BREDR &&
+               if (d->dev_type == HCI_PRIMARY &&
                    !hci_dev_test_flag(d, HCI_UNCONFIGURED))
                        count++;
        }
@@ -384,7 +384,7 @@ static int read_index_list(struct sock *sk, struct hci_dev *hdev, void *data,
                if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
                        continue;
 
-               if (d->dev_type == HCI_BREDR &&
+               if (d->dev_type == HCI_PRIMARY &&
                    !hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
                        rp->index[count++] = cpu_to_le16(d->id);
                        BT_DBG("Added hci%u", d->id);
@@ -419,7 +419,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
 
        count = 0;
        list_for_each_entry(d, &hci_dev_list, list) {
-               if (d->dev_type == HCI_BREDR &&
+               if (d->dev_type == HCI_PRIMARY &&
                    hci_dev_test_flag(d, HCI_UNCONFIGURED))
                        count++;
        }
@@ -444,7 +444,7 @@ static int read_unconf_index_list(struct sock *sk, struct hci_dev *hdev,
                if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
                        continue;
 
-               if (d->dev_type == HCI_BREDR &&
+               if (d->dev_type == HCI_PRIMARY &&
                    hci_dev_test_flag(d, HCI_UNCONFIGURED)) {
                        rp->index[count++] = cpu_to_le16(d->id);
                        BT_DBG("Added hci%u", d->id);
@@ -479,7 +479,7 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
 
        count = 0;
        list_for_each_entry(d, &hci_dev_list, list) {
-               if (d->dev_type == HCI_BREDR || d->dev_type == HCI_AMP)
+               if (d->dev_type == HCI_PRIMARY || d->dev_type == HCI_AMP)
                        count++;
        }
 
@@ -503,7 +503,7 @@ static int read_ext_index_list(struct sock *sk, struct hci_dev *hdev,
                if (test_bit(HCI_QUIRK_RAW_DEVICE, &d->quirks))
                        continue;
 
-               if (d->dev_type == HCI_BREDR) {
+               if (d->dev_type == HCI_PRIMARY) {
                        if (hci_dev_test_flag(d, HCI_UNCONFIGURED))
                                rp->entry[count].type = 0x01;
                        else
@@ -6366,7 +6366,7 @@ void mgmt_index_added(struct hci_dev *hdev)
                return;
 
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
                        mgmt_index_event(MGMT_EV_UNCONF_INDEX_ADDED, hdev,
                                         NULL, 0, HCI_MGMT_UNCONF_INDEX_EVENTS);
@@ -6399,7 +6399,7 @@ void mgmt_index_removed(struct hci_dev *hdev)
                return;
 
        switch (hdev->dev_type) {
-       case HCI_BREDR:
+       case HCI_PRIMARY:
                mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
 
                if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
index 50976a6..4c1a16a 100644
@@ -22,9 +22,9 @@
 
 #include <linux/debugfs.h>
 #include <linux/scatterlist.h>
+#include <linux/crypto.h>
 #include <crypto/b128ops.h>
 #include <crypto/hash.h>
-#include <crypto/skcipher.h>
 
 #include <net/bluetooth/bluetooth.h>
 #include <net/bluetooth/hci_core.h>
@@ -88,7 +88,7 @@ struct smp_dev {
        u8                      min_key_size;
        u8                      max_key_size;
 
-       struct crypto_skcipher  *tfm_aes;
+       struct crypto_cipher    *tfm_aes;
        struct crypto_shash     *tfm_cmac;
 };
 
@@ -127,7 +127,7 @@ struct smp_chan {
        u8                      dhkey[32];
        u8                      mackey[16];
 
-       struct crypto_skcipher  *tfm_aes;
+       struct crypto_cipher    *tfm_aes;
        struct crypto_shash     *tfm_cmac;
 };
 
@@ -361,10 +361,8 @@ static int smp_h6(struct crypto_shash *tfm_cmac, const u8 w[16],
  * s1 and ah.
  */
 
-static int smp_e(struct crypto_skcipher *tfm, const u8 *k, u8 *r)
+static int smp_e(struct crypto_cipher *tfm, const u8 *k, u8 *r)
 {
-       SKCIPHER_REQUEST_ON_STACK(req, tfm);
-       struct scatterlist sg;
        uint8_t tmp[16], data[16];
        int err;
 
@@ -378,7 +376,7 @@ static int smp_e(struct crypto_skcipher *tfm, const u8 *k, u8 *r)
        /* The most significant octet of key corresponds to k[0] */
        swap_buf(k, tmp, 16);
 
-       err = crypto_skcipher_setkey(tfm, tmp, 16);
+       err = crypto_cipher_setkey(tfm, tmp, 16);
        if (err) {
                BT_ERR("cipher setkey failed: %d", err);
                return err;
@@ -387,16 +385,7 @@ static int smp_e(struct crypto_skcipher *tfm, const u8 *k, u8 *r)
        /* Most significant octet of plaintextData corresponds to data[0] */
        swap_buf(r, data, 16);
 
-       sg_init_one(&sg, data, 16);
-
-       skcipher_request_set_tfm(req, tfm);
-       skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg, &sg, 16, NULL);
-
-       err = crypto_skcipher_encrypt(req);
-       skcipher_request_zero(req);
-       if (err)
-               BT_ERR("Encrypt data error %d", err);
+       crypto_cipher_encrypt_one(tfm, data, data);
 
        /* Most significant octet of encryptedData corresponds to data[0] */
        swap_buf(data, r, 16);
@@ -406,7 +395,7 @@ static int smp_e(struct crypto_skcipher *tfm, const u8 *k, u8 *r)
        return err;
 }
 
-static int smp_c1(struct crypto_skcipher *tfm_aes, const u8 k[16],
+static int smp_c1(struct crypto_cipher *tfm_aes, const u8 k[16],
                  const u8 r[16], const u8 preq[7], const u8 pres[7], u8 _iat,
                  const bdaddr_t *ia, u8 _rat, const bdaddr_t *ra, u8 res[16])
 {
@@ -455,7 +444,7 @@ static int smp_c1(struct crypto_skcipher *tfm_aes, const u8 k[16],
        return err;
 }
 
-static int smp_s1(struct crypto_skcipher *tfm_aes, const u8 k[16],
+static int smp_s1(struct crypto_cipher *tfm_aes, const u8 k[16],
                  const u8 r1[16], const u8 r2[16], u8 _r[16])
 {
        int err;
@@ -471,7 +460,7 @@ static int smp_s1(struct crypto_skcipher *tfm_aes, const u8 k[16],
        return err;
 }
 
-static int smp_ah(struct crypto_skcipher *tfm, const u8 irk[16],
+static int smp_ah(struct crypto_cipher *tfm, const u8 irk[16],
                  const u8 r[3], u8 res[3])
 {
        u8 _res[16];
@@ -759,7 +748,7 @@ static void smp_chan_destroy(struct l2cap_conn *conn)
        kzfree(smp->slave_csrk);
        kzfree(smp->link_key);
 
-       crypto_free_skcipher(smp->tfm_aes);
+       crypto_free_cipher(smp->tfm_aes);
        crypto_free_shash(smp->tfm_cmac);
 
        /* Ensure that we don't leave any debug key around if debug key
@@ -1359,9 +1348,9 @@ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
        if (!smp)
                return NULL;
 
-       smp->tfm_aes = crypto_alloc_skcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
+       smp->tfm_aes = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
        if (IS_ERR(smp->tfm_aes)) {
-               BT_ERR("Unable to create ECB crypto context");
+               BT_ERR("Unable to create AES crypto context");
                kzfree(smp);
                return NULL;
        }
@@ -1369,7 +1358,7 @@ static struct smp_chan *smp_chan_create(struct l2cap_conn *conn)
        smp->tfm_cmac = crypto_alloc_shash("cmac(aes)", 0, 0);
        if (IS_ERR(smp->tfm_cmac)) {
                BT_ERR("Unable to create CMAC crypto context");
-               crypto_free_skcipher(smp->tfm_aes);
+               crypto_free_cipher(smp->tfm_aes);
                kzfree(smp);
                return NULL;
        }
@@ -3120,7 +3109,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
 {
        struct l2cap_chan *chan;
        struct smp_dev *smp;
-       struct crypto_skcipher *tfm_aes;
+       struct crypto_cipher *tfm_aes;
        struct crypto_shash *tfm_cmac;
 
        if (cid == L2CAP_CID_SMP_BREDR) {
@@ -3132,9 +3121,9 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
        if (!smp)
                return ERR_PTR(-ENOMEM);
 
-       tfm_aes = crypto_alloc_skcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
+       tfm_aes = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
        if (IS_ERR(tfm_aes)) {
-               BT_ERR("Unable to create ECB crypto context");
+               BT_ERR("Unable to create AES crypto context");
                kzfree(smp);
                return ERR_CAST(tfm_aes);
        }
@@ -3142,7 +3131,7 @@ static struct l2cap_chan *smp_add_cid(struct hci_dev *hdev, u16 cid)
        tfm_cmac = crypto_alloc_shash("cmac(aes)", 0, 0);
        if (IS_ERR(tfm_cmac)) {
                BT_ERR("Unable to create CMAC crypto context");
-               crypto_free_skcipher(tfm_aes);
+               crypto_free_cipher(tfm_aes);
                kzfree(smp);
                return ERR_CAST(tfm_cmac);
        }
@@ -3156,7 +3145,7 @@ create_chan:
        chan = l2cap_chan_create();
        if (!chan) {
                if (smp) {
-                       crypto_free_skcipher(smp->tfm_aes);
+                       crypto_free_cipher(smp->tfm_aes);
                        crypto_free_shash(smp->tfm_cmac);
                        kzfree(smp);
                }
@@ -3203,7 +3192,7 @@ static void smp_del_chan(struct l2cap_chan *chan)
        smp = chan->data;
        if (smp) {
                chan->data = NULL;
-               crypto_free_skcipher(smp->tfm_aes);
+               crypto_free_cipher(smp->tfm_aes);
                crypto_free_shash(smp->tfm_cmac);
                kzfree(smp);
        }
@@ -3440,7 +3429,7 @@ void smp_unregister(struct hci_dev *hdev)
 
 #if IS_ENABLED(CONFIG_BT_SELFTEST_SMP)
 
-static int __init test_ah(struct crypto_skcipher *tfm_aes)
+static int __init test_ah(struct crypto_cipher *tfm_aes)
 {
        const u8 irk[16] = {
                        0x9b, 0x7d, 0x39, 0x0a, 0xa6, 0x10, 0x10, 0x34,
@@ -3460,7 +3449,7 @@ static int __init test_ah(struct crypto_skcipher *tfm_aes)
        return 0;
 }
 
-static int __init test_c1(struct crypto_skcipher *tfm_aes)
+static int __init test_c1(struct crypto_cipher *tfm_aes)
 {
        const u8 k[16] = {
                        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
@@ -3490,7 +3479,7 @@ static int __init test_c1(struct crypto_skcipher *tfm_aes)
        return 0;
 }
 
-static int __init test_s1(struct crypto_skcipher *tfm_aes)
+static int __init test_s1(struct crypto_cipher *tfm_aes)
 {
        const u8 k[16] = {
                        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
@@ -3686,7 +3675,7 @@ static const struct file_operations test_smp_fops = {
        .llseek         = default_llseek,
 };
 
-static int __init run_selftests(struct crypto_skcipher *tfm_aes,
+static int __init run_selftests(struct crypto_cipher *tfm_aes,
                                struct crypto_shash *tfm_cmac)
 {
        ktime_t calltime, delta, rettime;
@@ -3764,27 +3753,27 @@ done:
 
 int __init bt_selftest_smp(void)
 {
-       struct crypto_skcipher *tfm_aes;
+       struct crypto_cipher *tfm_aes;
        struct crypto_shash *tfm_cmac;
        int err;
 
-       tfm_aes = crypto_alloc_skcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
+       tfm_aes = crypto_alloc_cipher("aes", 0, CRYPTO_ALG_ASYNC);
        if (IS_ERR(tfm_aes)) {
-               BT_ERR("Unable to create ECB crypto context");
+               BT_ERR("Unable to create AES crypto context");
                return PTR_ERR(tfm_aes);
        }
 
        tfm_cmac = crypto_alloc_shash("cmac(aes)", 0, CRYPTO_ALG_ASYNC);
        if (IS_ERR(tfm_cmac)) {
                BT_ERR("Unable to create CMAC crypto context");
-               crypto_free_skcipher(tfm_aes);
+               crypto_free_cipher(tfm_aes);
                return PTR_ERR(tfm_cmac);
        }
 
        err = run_selftests(tfm_aes, tfm_cmac);
 
        crypto_free_shash(tfm_cmac);
-       crypto_free_skcipher(tfm_aes);
+       crypto_free_cipher(tfm_aes);
 
        return err;
 }
index 8eecd0e..09f2694 100644 (file)
@@ -61,11 +61,11 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
        if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid))
                goto out;
 
-       if (is_broadcast_ether_addr(dest))
-               br_flood_deliver(br, skb, false);
-       else if (is_multicast_ether_addr(dest)) {
+       if (is_broadcast_ether_addr(dest)) {
+               br_flood(br, skb, false, false, true);
+       } else if (is_multicast_ether_addr(dest)) {
                if (unlikely(netpoll_tx_running(dev))) {
-                       br_flood_deliver(br, skb, false);
+                       br_flood(br, skb, false, false, true);
                        goto out;
                }
                if (br_multicast_rcv(br, NULL, skb, vid)) {
@@ -76,14 +76,14 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
                mdst = br_mdb_get(br, skb, vid);
                if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
                    br_multicast_querier_exists(br, eth_hdr(skb)))
-                       br_multicast_deliver(mdst, skb);
+                       br_multicast_flood(mdst, skb, false, true);
                else
-                       br_flood_deliver(br, skb, false);
-       } else if ((dst = __br_fdb_get(br, dest, vid)) != NULL)
-               br_deliver(dst->dst, skb);
-       else
-               br_flood_deliver(br, skb, true);
-
+                       br_flood(br, skb, false, false, true);
+       } else if ((dst = __br_fdb_get(br, dest, vid)) != NULL) {
+               br_forward(dst->dst, skb, false, true);
+       } else {
+               br_flood(br, skb, true, false, true);
+       }
 out:
        rcu_read_unlock();
        return NETDEV_TX_OK;
index 6c19603..63a83d8 100644 (file)
 #include <linux/netfilter_bridge.h>
 #include "br_private.h"
 
-static int deliver_clone(const struct net_bridge_port *prev,
-                        struct sk_buff *skb,
-                        void (*__packet_hook)(const struct net_bridge_port *p,
-                                              struct sk_buff *skb));
-
 /* Don't forward packets to originating port or forwarding disabled */
 static inline int should_deliver(const struct net_bridge_port *p,
                                 const struct sk_buff *skb)
@@ -75,105 +70,92 @@ int br_forward_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 }
 EXPORT_SYMBOL_GPL(br_forward_finish);
 
-static void __br_deliver(const struct net_bridge_port *to, struct sk_buff *skb)
+static void __br_forward(const struct net_bridge_port *to,
+                        struct sk_buff *skb, bool local_orig)
 {
        struct net_bridge_vlan_group *vg;
+       struct net_device *indev;
+       struct net *net;
+       int br_hook;
 
        vg = nbp_vlan_group_rcu(to);
        skb = br_handle_vlan(to->br, vg, skb);
        if (!skb)
                return;
 
+       indev = skb->dev;
        skb->dev = to->dev;
-
-       if (unlikely(netpoll_tx_running(to->br->dev))) {
-               if (!is_skb_forwardable(skb->dev, skb))
+       if (!local_orig) {
+               if (skb_warn_if_lro(skb)) {
                        kfree_skb(skb);
-               else {
-                       skb_push(skb, ETH_HLEN);
-                       br_netpoll_send_skb(to, skb);
+                       return;
                }
-               return;
+               br_hook = NF_BR_FORWARD;
+               skb_forward_csum(skb);
+               net = dev_net(indev);
+       } else {
+               if (unlikely(netpoll_tx_running(to->br->dev))) {
+                       if (!is_skb_forwardable(skb->dev, skb)) {
+                               kfree_skb(skb);
+                       } else {
+                               skb_push(skb, ETH_HLEN);
+                               br_netpoll_send_skb(to, skb);
+                       }
+                       return;
+               }
+               br_hook = NF_BR_LOCAL_OUT;
+               net = dev_net(skb->dev);
+               indev = NULL;
        }
 
-       NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT,
-               dev_net(skb->dev), NULL, skb,NULL, skb->dev,
+       NF_HOOK(NFPROTO_BRIDGE, br_hook,
+               net, NULL, skb, indev, skb->dev,
                br_forward_finish);
 }
 
-static void __br_forward(const struct net_bridge_port *to, struct sk_buff *skb)
+static int deliver_clone(const struct net_bridge_port *prev,
+                        struct sk_buff *skb, bool local_orig)
 {
-       struct net_bridge_vlan_group *vg;
-       struct net_device *indev;
-
-       if (skb_warn_if_lro(skb)) {
-               kfree_skb(skb);
-               return;
-       }
-
-       vg = nbp_vlan_group_rcu(to);
-       skb = br_handle_vlan(to->br, vg, skb);
-       if (!skb)
-               return;
-
-       indev = skb->dev;
-       skb->dev = to->dev;
-       skb_forward_csum(skb);
-
-       NF_HOOK(NFPROTO_BRIDGE, NF_BR_FORWARD,
-               dev_net(indev), NULL, skb, indev, skb->dev,
-               br_forward_finish);
-}
+       struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev;
 
-/* called with rcu_read_lock */
-void br_deliver(const struct net_bridge_port *to, struct sk_buff *skb)
-{
-       if (to && should_deliver(to, skb)) {
-               __br_deliver(to, skb);
-               return;
+       skb = skb_clone(skb, GFP_ATOMIC);
+       if (!skb) {
+               dev->stats.tx_dropped++;
+               return -ENOMEM;
        }
 
-       kfree_skb(skb);
+       __br_forward(prev, skb, local_orig);
+       return 0;
 }
-EXPORT_SYMBOL_GPL(br_deliver);
 
-/* called with rcu_read_lock */
-void br_forward(const struct net_bridge_port *to, struct sk_buff *skb, struct sk_buff *skb0)
+/**
+ * br_forward - forward a packet to a specific port
+ * @to: destination port
+ * @skb: packet being forwarded
+ * @local_rcv: packet will be received locally after forwarding
+ * @local_orig: packet is locally originated
+ *
+ * Should be called with rcu_read_lock.
+ */
+void br_forward(const struct net_bridge_port *to,
+               struct sk_buff *skb, bool local_rcv, bool local_orig)
 {
        if (to && should_deliver(to, skb)) {
-               if (skb0)
-                       deliver_clone(to, skb, __br_forward);
+               if (local_rcv)
+                       deliver_clone(to, skb, local_orig);
                else
-                       __br_forward(to, skb);
+                       __br_forward(to, skb, local_orig);
                return;
        }
 
-       if (!skb0)
+       if (!local_rcv)
                kfree_skb(skb);
 }
-
-static int deliver_clone(const struct net_bridge_port *prev,
-                        struct sk_buff *skb,
-                        void (*__packet_hook)(const struct net_bridge_port *p,
-                                              struct sk_buff *skb))
-{
-       struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev;
-
-       skb = skb_clone(skb, GFP_ATOMIC);
-       if (!skb) {
-               dev->stats.tx_dropped++;
-               return -ENOMEM;
-       }
-
-       __packet_hook(prev, skb);
-       return 0;
-}
+EXPORT_SYMBOL_GPL(br_forward);
 
 static struct net_bridge_port *maybe_deliver(
        struct net_bridge_port *prev, struct net_bridge_port *p,
-       struct sk_buff *skb,
-       void (*__packet_hook)(const struct net_bridge_port *p,
-                             struct sk_buff *skb))
+       struct sk_buff *skb, bool local_orig)
 {
        int err;
 
@@ -183,7 +165,7 @@ static struct net_bridge_port *maybe_deliver(
        if (!prev)
                goto out;
 
-       err = deliver_clone(prev, skb, __packet_hook);
+       err = deliver_clone(prev, skb, local_orig);
        if (err)
                return ERR_PTR(err);
 
@@ -191,20 +173,14 @@ out:
        return p;
 }
 
-/* called under bridge lock */
-static void br_flood(struct net_bridge *br, struct sk_buff *skb,
-                    struct sk_buff *skb0,
-                    void (*__packet_hook)(const struct net_bridge_port *p,
-                                          struct sk_buff *skb),
-                    bool unicast)
+/* called under rcu_read_lock */
+void br_flood(struct net_bridge *br, struct sk_buff *skb,
+             bool unicast, bool local_rcv, bool local_orig)
 {
        u8 igmp_type = br_multicast_igmp_type(skb);
-       __be16 proto = skb->protocol;
-       struct net_bridge_port *prev;
+       struct net_bridge_port *prev = NULL;
        struct net_bridge_port *p;
 
-       prev = NULL;
-
        list_for_each_entry_rcu(p, &br->port_list, list) {
                /* Do not flood unicast traffic to ports that turn it off */
                if (unicast && !(p->flags & BR_FLOOD))
@@ -217,57 +193,39 @@ static void br_flood(struct net_bridge *br, struct sk_buff *skb,
                    BR_INPUT_SKB_CB(skb)->proxyarp_replied)
                        continue;
 
-               prev = maybe_deliver(prev, p, skb, __packet_hook);
+               prev = maybe_deliver(prev, p, skb, local_orig);
                if (IS_ERR(prev))
                        goto out;
                if (prev == p)
-                       br_multicast_count(p->br, p, proto, igmp_type,
+                       br_multicast_count(p->br, p, skb, igmp_type,
                                           BR_MCAST_DIR_TX);
        }
 
        if (!prev)
                goto out;
 
-       if (skb0)
-               deliver_clone(prev, skb, __packet_hook);
+       if (local_rcv)
+               deliver_clone(prev, skb, local_orig);
        else
-               __packet_hook(prev, skb);
+               __br_forward(prev, skb, local_orig);
        return;
 
 out:
-       if (!skb0)
+       if (!local_rcv)
                kfree_skb(skb);
 }
 
-
-/* called with rcu_read_lock */
-void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb, bool unicast)
-{
-       br_flood(br, skb, NULL, __br_deliver, unicast);
-}
-
-/* called under bridge lock */
-void br_flood_forward(struct net_bridge *br, struct sk_buff *skb,
-                     struct sk_buff *skb2, bool unicast)
-{
-       br_flood(br, skb, skb2, __br_forward, unicast);
-}
-
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 /* called with rcu_read_lock */
-static void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
-                              struct sk_buff *skb, struct sk_buff *skb0,
-                              void (*__packet_hook)(
-                                       const struct net_bridge_port *p,
-                                       struct sk_buff *skb))
+void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
+                       struct sk_buff *skb,
+                       bool local_rcv, bool local_orig)
 {
        struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev;
        u8 igmp_type = br_multicast_igmp_type(skb);
        struct net_bridge *br = netdev_priv(dev);
        struct net_bridge_port *prev = NULL;
        struct net_bridge_port_group *p;
-       __be16 proto = skb->protocol;
-
        struct hlist_node *rp;
 
        rp = rcu_dereference(hlist_first_rcu(&br->router_list));
@@ -282,11 +240,11 @@ static void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
                port = (unsigned long)lport > (unsigned long)rport ?
                       lport : rport;
 
-               prev = maybe_deliver(prev, port, skb, __packet_hook);
+               prev = maybe_deliver(prev, port, skb, local_orig);
                if (IS_ERR(prev))
                        goto out;
                if (prev == port)
-                       br_multicast_count(port->br, port, proto, igmp_type,
+                       br_multicast_count(port->br, port, skb, igmp_type,
                                           BR_MCAST_DIR_TX);
 
                if ((unsigned long)lport >= (unsigned long)port)
@@ -298,28 +256,14 @@ static void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
        if (!prev)
                goto out;
 
-       if (skb0)
-               deliver_clone(prev, skb, __packet_hook);
+       if (local_rcv)
+               deliver_clone(prev, skb, local_orig);
        else
-               __packet_hook(prev, skb);
+               __br_forward(prev, skb, local_orig);
        return;
 
 out:
-       if (!skb0)
+       if (!local_rcv)
                kfree_skb(skb);
 }
-
-/* called with rcu_read_lock */
-void br_multicast_deliver(struct net_bridge_mdb_entry *mdst,
-                         struct sk_buff *skb)
-{
-       br_multicast_flood(mdst, skb, NULL, __br_deliver);
-}
-
-/* called with rcu_read_lock */
-void br_multicast_forward(struct net_bridge_mdb_entry *mdst,
-                         struct sk_buff *skb, struct sk_buff *skb2)
-{
-       br_multicast_flood(mdst, skb, skb2, __br_forward);
-}
 #endif
index 786602b..8b08eec 100644 (file)
@@ -61,7 +61,7 @@ static int br_pass_frame_up(struct sk_buff *skb)
        if (!skb)
                return NET_RX_DROP;
        /* update the multicast stats if the packet is IGMP/MLD */
-       br_multicast_count(br, NULL, skb->protocol, br_multicast_igmp_type(skb),
+       br_multicast_count(br, NULL, skb, br_multicast_igmp_type(skb),
                           BR_MCAST_DIR_TX);
 
        return NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN,
@@ -131,13 +131,12 @@ static void br_do_proxy_arp(struct sk_buff *skb, struct net_bridge *br,
 /* note: already called with rcu_read_lock */
 int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
-       const unsigned char *dest = eth_hdr(skb)->h_dest;
+       bool local_rcv = false, mcast_hit = false, unicast = true;
        struct net_bridge_port *p = br_port_get_rcu(skb->dev);
-       struct net_bridge *br;
-       struct net_bridge_fdb_entry *dst;
+       const unsigned char *dest = eth_hdr(skb)->h_dest;
+       struct net_bridge_fdb_entry *dst = NULL;
        struct net_bridge_mdb_entry *mdst;
-       struct sk_buff *skb2;
-       bool unicast = true;
+       struct net_bridge *br;
        u16 vid = 0;
 
        if (!p || p->state == BR_STATE_DISABLED)
@@ -160,53 +159,46 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 
        BR_INPUT_SKB_CB(skb)->brdev = br->dev;
 
-       /* The packet skb2 goes to the local host (NULL to skip). */
-       skb2 = NULL;
-
-       if (br->dev->flags & IFF_PROMISC)
-               skb2 = skb;
-
-       dst = NULL;
+       local_rcv = !!(br->dev->flags & IFF_PROMISC);
 
        if (IS_ENABLED(CONFIG_INET) && skb->protocol == htons(ETH_P_ARP))
                br_do_proxy_arp(skb, br, vid, p);
 
        if (is_broadcast_ether_addr(dest)) {
-               skb2 = skb;
+               local_rcv = true;
                unicast = false;
        } else if (is_multicast_ether_addr(dest)) {
                mdst = br_mdb_get(br, skb, vid);
                if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
                    br_multicast_querier_exists(br, eth_hdr(skb))) {
                        if ((mdst && mdst->mglist) ||
-                           br_multicast_is_router(br))
-                               skb2 = skb;
-                       br_multicast_forward(mdst, skb, skb2);
-                       skb = NULL;
-                       if (!skb2)
-                               goto out;
-               } else
-                       skb2 = skb;
-
+                           br_multicast_is_router(br)) {
+                               local_rcv = true;
+                               br->dev->stats.multicast++;
+                       }
+                       mcast_hit = true;
+               } else {
+                       local_rcv = true;
+                       br->dev->stats.multicast++;
+               }
                unicast = false;
-               br->dev->stats.multicast++;
-       } else if ((dst = __br_fdb_get(br, dest, vid)) &&
-                       dst->is_local) {
-               skb2 = skb;
+       } else if ((dst = __br_fdb_get(br, dest, vid)) && dst->is_local) {
                /* Do not forward the packet since it's local. */
-               skb = NULL;
+               return br_pass_frame_up(skb);
        }
 
-       if (skb) {
-               if (dst) {
-                       dst->used = jiffies;
-                       br_forward(dst->dst, skb, skb2);
-               } else
-                       br_flood_forward(br, skb, skb2, unicast);
+       if (dst) {
+               dst->used = jiffies;
+               br_forward(dst->dst, skb, local_rcv, false);
+       } else {
+               if (!mcast_hit)
+                       br_flood(br, skb, unicast, local_rcv, false);
+               else
+                       br_multicast_flood(mdst, skb, local_rcv, false);
        }
 
-       if (skb2)
-               return br_pass_frame_up(skb2);
+       if (local_rcv)
+               return br_pass_frame_up(skb);
 
 out:
        return 0;
index e405eef..a5423a1 100644 (file)
@@ -843,14 +843,14 @@ static void __br_multicast_send_query(struct net_bridge *br,
 
        if (port) {
                skb->dev = port->dev;
-               br_multicast_count(br, port, skb->protocol, igmp_type,
+               br_multicast_count(br, port, skb, igmp_type,
                                   BR_MCAST_DIR_TX);
                NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_OUT,
                        dev_net(port->dev), NULL, skb, NULL, skb->dev,
                        br_dev_queue_push_xmit);
        } else {
                br_multicast_select_own_querier(br, ip, skb);
-               br_multicast_count(br, port, skb->protocol, igmp_type,
+               br_multicast_count(br, port, skb, igmp_type,
                                   BR_MCAST_DIR_RX);
                netif_rx(skb);
        }
@@ -1676,7 +1676,7 @@ static int br_multicast_ipv4_rcv(struct net_bridge *br,
        if (skb_trimmed && skb_trimmed != skb)
                kfree_skb(skb_trimmed);
 
-       br_multicast_count(br, port, skb->protocol, BR_INPUT_SKB_CB(skb)->igmp,
+       br_multicast_count(br, port, skb, BR_INPUT_SKB_CB(skb)->igmp,
                           BR_MCAST_DIR_RX);
 
        return err;
@@ -1725,7 +1725,7 @@ static int br_multicast_ipv6_rcv(struct net_bridge *br,
        if (skb_trimmed && skb_trimmed != skb)
                kfree_skb(skb_trimmed);
 
-       br_multicast_count(br, port, skb->protocol, BR_INPUT_SKB_CB(skb)->igmp,
+       br_multicast_count(br, port, skb, BR_INPUT_SKB_CB(skb)->igmp,
                           BR_MCAST_DIR_RX);
 
        return err;
@@ -2251,13 +2251,16 @@ unlock:
 EXPORT_SYMBOL_GPL(br_multicast_has_querier_adjacent);
 
 static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats,
-                              __be16 proto, u8 type, u8 dir)
+                              const struct sk_buff *skb, u8 type, u8 dir)
 {
        struct bridge_mcast_stats *pstats = this_cpu_ptr(stats);
+       __be16 proto = skb->protocol;
+       unsigned int t_len;
 
        u64_stats_update_begin(&pstats->syncp);
        switch (proto) {
        case htons(ETH_P_IP):
+               t_len = ntohs(ip_hdr(skb)->tot_len) - ip_hdrlen(skb);
                switch (type) {
                case IGMP_HOST_MEMBERSHIP_REPORT:
                        pstats->mstats.igmp_v1reports[dir]++;
@@ -2269,7 +2272,21 @@ static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats,
                        pstats->mstats.igmp_v3reports[dir]++;
                        break;
                case IGMP_HOST_MEMBERSHIP_QUERY:
-                       pstats->mstats.igmp_queries[dir]++;
+                       if (t_len != sizeof(struct igmphdr)) {
+                               pstats->mstats.igmp_v3queries[dir]++;
+                       } else {
+                               unsigned int offset = skb_transport_offset(skb);
+                               struct igmphdr *ih, _ihdr;
+
+                               ih = skb_header_pointer(skb, offset,
+                                                       sizeof(_ihdr), &_ihdr);
+                               if (!ih)
+                                       break;
+                               if (!ih->code)
+                                       pstats->mstats.igmp_v1queries[dir]++;
+                               else
+                                       pstats->mstats.igmp_v2queries[dir]++;
+                       }
                        break;
                case IGMP_HOST_LEAVE_MESSAGE:
                        pstats->mstats.igmp_leaves[dir]++;
@@ -2278,6 +2295,9 @@ static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats,
                break;
 #if IS_ENABLED(CONFIG_IPV6)
        case htons(ETH_P_IPV6):
+               t_len = ntohs(ipv6_hdr(skb)->payload_len) +
+                       sizeof(struct ipv6hdr);
+               t_len -= skb_network_header_len(skb);
                switch (type) {
                case ICMPV6_MGM_REPORT:
                        pstats->mstats.mld_v1reports[dir]++;
@@ -2286,7 +2306,10 @@ static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats,
                        pstats->mstats.mld_v2reports[dir]++;
                        break;
                case ICMPV6_MGM_QUERY:
-                       pstats->mstats.mld_queries[dir]++;
+                       if (t_len != sizeof(struct mld_msg))
+                               pstats->mstats.mld_v2queries[dir]++;
+                       else
+                               pstats->mstats.mld_v1queries[dir]++;
                        break;
                case ICMPV6_MGM_REDUCTION:
                        pstats->mstats.mld_leaves[dir]++;
@@ -2299,7 +2322,7 @@ static void br_mcast_stats_add(struct bridge_mcast_stats __percpu *stats,
 }
 
 void br_multicast_count(struct net_bridge *br, const struct net_bridge_port *p,
-                       __be16 proto, u8 type, u8 dir)
+                       const struct sk_buff *skb, u8 type, u8 dir)
 {
        struct bridge_mcast_stats __percpu *stats;
 
@@ -2314,7 +2337,7 @@ void br_multicast_count(struct net_bridge *br, const struct net_bridge_port *p,
        if (WARN_ON(!stats))
                return;
 
-       br_mcast_stats_add(stats, proto, type, dir);
+       br_mcast_stats_add(stats, skb, type, dir);
 }
 
 int br_multicast_init_stats(struct net_bridge *br)
@@ -2359,14 +2382,17 @@ void br_multicast_get_stats(const struct net_bridge *br,
                        memcpy(&temp, &cpu_stats->mstats, sizeof(temp));
                } while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, start));
 
-               mcast_stats_add_dir(tdst.igmp_queries, temp.igmp_queries);
+               mcast_stats_add_dir(tdst.igmp_v1queries, temp.igmp_v1queries);
+               mcast_stats_add_dir(tdst.igmp_v2queries, temp.igmp_v2queries);
+               mcast_stats_add_dir(tdst.igmp_v3queries, temp.igmp_v3queries);
                mcast_stats_add_dir(tdst.igmp_leaves, temp.igmp_leaves);
                mcast_stats_add_dir(tdst.igmp_v1reports, temp.igmp_v1reports);
                mcast_stats_add_dir(tdst.igmp_v2reports, temp.igmp_v2reports);
                mcast_stats_add_dir(tdst.igmp_v3reports, temp.igmp_v3reports);
                tdst.igmp_parse_errors += temp.igmp_parse_errors;
 
-               mcast_stats_add_dir(tdst.mld_queries, temp.mld_queries);
+               mcast_stats_add_dir(tdst.mld_v1queries, temp.mld_v1queries);
+               mcast_stats_add_dir(tdst.mld_v2queries, temp.mld_v2queries);
                mcast_stats_add_dir(tdst.mld_leaves, temp.mld_leaves);
                mcast_stats_add_dir(tdst.mld_v1reports, temp.mld_v1reports);
                mcast_stats_add_dir(tdst.mld_v2reports, temp.mld_v2reports);
index 4dc8511..b308826 100644 (file)
@@ -505,14 +505,12 @@ int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
                              const unsigned char *addr, u16 vid);
 
 /* br_forward.c */
-void br_deliver(const struct net_bridge_port *to, struct sk_buff *skb);
 int br_dev_queue_push_xmit(struct net *net, struct sock *sk, struct sk_buff *skb);
-void br_forward(const struct net_bridge_port *to,
-               struct sk_buff *skb, struct sk_buff *skb0);
+void br_forward(const struct net_bridge_port *to, struct sk_buff *skb,
+               bool local_rcv, bool local_orig);
 int br_forward_finish(struct net *net, struct sock *sk, struct sk_buff *skb);
-void br_flood_deliver(struct net_bridge *br, struct sk_buff *skb, bool unicast);
-void br_flood_forward(struct net_bridge *br, struct sk_buff *skb,
-                     struct sk_buff *skb2, bool unicast);
+void br_flood(struct net_bridge *br, struct sk_buff *skb,
+             bool unicast, bool local_rcv, bool local_orig);
 
 /* br_if.c */
 void br_port_carrier_check(struct net_bridge_port *p);
@@ -560,10 +558,8 @@ void br_multicast_init(struct net_bridge *br);
 void br_multicast_open(struct net_bridge *br);
 void br_multicast_stop(struct net_bridge *br);
 void br_multicast_dev_del(struct net_bridge *br);
-void br_multicast_deliver(struct net_bridge_mdb_entry *mdst,
-                         struct sk_buff *skb);
-void br_multicast_forward(struct net_bridge_mdb_entry *mdst,
-                         struct sk_buff *skb, struct sk_buff *skb2);
+void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
+                       struct sk_buff *skb, bool local_rcv, bool local_orig);
 int br_multicast_set_router(struct net_bridge *br, unsigned long val);
 int br_multicast_set_port_router(struct net_bridge_port *p, unsigned long val);
 int br_multicast_toggle(struct net_bridge *br, unsigned long val);
@@ -586,7 +582,7 @@ void br_mdb_notify(struct net_device *dev, struct net_bridge_port *port,
 void br_rtr_notify(struct net_device *dev, struct net_bridge_port *port,
                   int type);
 void br_multicast_count(struct net_bridge *br, const struct net_bridge_port *p,
-                       __be16 proto, u8 type, u8 dir);
+                       const struct sk_buff *skb, u8 type, u8 dir);
 int br_multicast_init_stats(struct net_bridge *br);
 void br_multicast_get_stats(const struct net_bridge *br,
                            const struct net_bridge_port *p,
@@ -691,35 +687,35 @@ static inline void br_multicast_dev_del(struct net_bridge *br)
 {
 }
 
-static inline void br_multicast_deliver(struct net_bridge_mdb_entry *mdst,
-                                       struct sk_buff *skb)
+static inline void br_multicast_flood(struct net_bridge_mdb_entry *mdst,
+                                     struct sk_buff *skb,
+                                     bool local_rcv, bool local_orig)
 {
 }
 
-static inline void br_multicast_forward(struct net_bridge_mdb_entry *mdst,
-                                       struct sk_buff *skb,
-                                       struct sk_buff *skb2)
-{
-}
 static inline bool br_multicast_is_router(struct net_bridge *br)
 {
        return 0;
 }
+
 static inline bool br_multicast_querier_exists(struct net_bridge *br,
                                               struct ethhdr *eth)
 {
        return false;
 }
+
 static inline void br_mdb_init(void)
 {
 }
+
 static inline void br_mdb_uninit(void)
 {
 }
 
 static inline void br_multicast_count(struct net_bridge *br,
                                      const struct net_bridge_port *p,
-                                     __be16 proto, u8 type, u8 dir)
+                                     const struct sk_buff *skb,
+                                     u8 type, u8 dir)
 {
 }
 
index 77f7e7a..0b77ffb 100644 (file)
@@ -72,7 +72,7 @@ static void nft_reject_br_send_v4_tcp_reset(struct net *net,
 
        nft_reject_br_push_etherhdr(oldskb, nskb);
 
-       br_deliver(br_port_get_rcu(dev), nskb);
+       br_forward(br_port_get_rcu(dev), nskb, false, true);
 }
 
 static void nft_reject_br_send_v4_unreach(struct net *net,
@@ -140,7 +140,7 @@ static void nft_reject_br_send_v4_unreach(struct net *net,
 
        nft_reject_br_push_etherhdr(oldskb, nskb);
 
-       br_deliver(br_port_get_rcu(dev), nskb);
+       br_forward(br_port_get_rcu(dev), nskb, false, true);
 }
 
 static void nft_reject_br_send_v6_tcp_reset(struct net *net,
@@ -174,7 +174,7 @@ static void nft_reject_br_send_v6_tcp_reset(struct net *net,
 
        nft_reject_br_push_etherhdr(oldskb, nskb);
 
-       br_deliver(br_port_get_rcu(dev), nskb);
+       br_forward(br_port_get_rcu(dev), nskb, false, true);
 }
 
 static bool reject6_br_csum_ok(struct sk_buff *skb, int hook)
@@ -255,7 +255,7 @@ static void nft_reject_br_send_v6_unreach(struct net *net,
 
        nft_reject_br_push_etherhdr(oldskb, nskb);
 
-       br_deliver(br_port_get_rcu(dev), nskb);
+       br_forward(br_port_get_rcu(dev), nskb, false, true);
 }
 
 static void nft_reject_bridge_eval(const struct nft_expr *expr,
index b92d63b..2a9c39f 100644 (file)
@@ -94,6 +94,7 @@
 #include <linux/ethtool.h>
 #include <linux/notifier.h>
 #include <linux/skbuff.h>
+#include <linux/bpf.h>
 #include <net/net_namespace.h>
 #include <net/sock.h>
 #include <net/busy_poll.h>
@@ -4972,7 +4973,7 @@ bool sk_busy_loop(struct sock *sk, int nonblock)
 
                        if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
                                rc = napi->poll(napi, BUSY_POLL_BUDGET);
-                               trace_napi_poll(napi);
+                               trace_napi_poll(napi, rc, BUSY_POLL_BUDGET);
                                if (rc == BUSY_POLL_BUDGET) {
                                        napi_complete_done(napi, rc);
                                        napi_schedule(napi);
@@ -5128,7 +5129,7 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
        work = 0;
        if (test_bit(NAPI_STATE_SCHED, &n->state)) {
                work = n->poll(n, weight);
-               trace_napi_poll(n);
+               trace_napi_poll(n, work, weight);
        }
 
        WARN_ON_ONCE(work > weight);
@@ -6614,6 +6615,38 @@ int dev_change_proto_down(struct net_device *dev, bool proto_down)
 }
 EXPORT_SYMBOL(dev_change_proto_down);
 
+/**
+ *     dev_change_xdp_fd - set or clear a bpf program for a device rx path
+ *     @dev: device
+ *     @fd: new program fd or negative value to clear
+ *
+ *     Set or clear a bpf program for a device
+ */
+int dev_change_xdp_fd(struct net_device *dev, int fd)
+{
+       const struct net_device_ops *ops = dev->netdev_ops;
+       struct bpf_prog *prog = NULL;
+       struct netdev_xdp xdp = {};
+       int err;
+
+       if (!ops->ndo_xdp)
+               return -EOPNOTSUPP;
+       if (fd >= 0) {
+               prog = bpf_prog_get_type(fd, BPF_PROG_TYPE_XDP);
+               if (IS_ERR(prog))
+                       return PTR_ERR(prog);
+       }
+
+       xdp.command = XDP_SETUP_PROG;
+       xdp.prog = prog;
+       err = ops->ndo_xdp(dev, &xdp);
+       if (err < 0 && prog)
+               bpf_prog_put(prog);
+
+       return err;
+}
+EXPORT_SYMBOL(dev_change_xdp_fd);
+
 /**
  *     dev_new_index   -       allocate an ifindex
  *     @net: the applicable net namespace
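The `dev_change_xdp_fd()` addition above follows a common kernel reference-counting pattern: resolve the fd to a program (taking a reference), hand it to the driver, and drop the reference if the driver refuses it; a negative fd means "clear". The sketch below models that flow with hypothetical stand-in types (`struct prog`, `fake_prog_get`, `fake_prog_put` are illustration-only names, not kernel APIs), with the driver verdict reduced to a flag:

```c
#include <assert.h>
#include <stddef.h>

#define EOPNOTSUPP 95
#define EBADF      9

/* Hypothetical stand-in for a refcounted bpf_prog. */
struct prog { int refs; };

/* Mimics bpf_prog_get_type(): only one fd resolves to a program here. */
static int fake_prog_get(int fd, struct prog *p, struct prog **out)
{
	if (fd != 42)		/* pretend only fd 42 is a valid program */
		return -EBADF;
	p->refs++;
	*out = p;
	return 0;
}

static void fake_prog_put(struct prog *p) { p->refs--; }

/*
 * Sketch of the dev_change_xdp_fd() flow: fd < 0 clears the program,
 * and a driver rejection must release the reference we just took.
 */
static int change_xdp_fd(int fd, struct prog *pool, int driver_accepts,
			 struct prog **attached)
{
	struct prog *prog = NULL;
	int err;

	if (fd >= 0) {
		err = fake_prog_get(fd, pool, &prog);
		if (err)
			return err;
	}
	if (!driver_accepts) {
		if (prog)
			fake_prog_put(prog);	/* driver refused: drop our ref */
		return -EOPNOTSUPP;
	}
	*attached = prog;			/* NULL means "program cleared" */
	return 0;
}
```

The key invariant is that the caller's reference is never leaked: every path that took a reference either hands it to the device or puts it back.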
index b2e592a..1b50630 100644 (file)
 #include <net/net_namespace.h>
 #include <net/sock.h>
 #include <net/devlink.h>
+#define CREATE_TRACE_POINTS
+#include <trace/events/devlink.h>
+
+EXPORT_TRACEPOINT_SYMBOL_GPL(devlink_hwmsg);
 
 static LIST_HEAD(devlink_list);
 
index 252e155..d6b3b57 100644 (file)
@@ -187,7 +187,8 @@ static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *locatio
        trace_drop_common(skb, location);
 }
 
-static void trace_napi_poll_hit(void *ignore, struct napi_struct *napi)
+static void trace_napi_poll_hit(void *ignore, struct napi_struct *napi,
+                               int work, int budget)
 {
        struct dm_hw_stat_delta *new_stat;
 
index 10c4a2f..0b52135 100644 (file)
 #include <net/sock_reuseport.h>
 
 /**
- *     sk_filter - run a packet through a socket filter
+ *     sk_filter_trim_cap - run a packet through a socket filter
  *     @sk: sock associated with &sk_buff
  *     @skb: buffer to filter
+ *     @cap: limit on how short the eBPF program may trim the packet
  *
  * Run the eBPF program and then cut skb->data to correct size returned by
  * the program. If pkt_len is 0 we toss packet. If skb->len is smaller
@@ -64,7 +65,7 @@
  * be accepted or -EPERM if the packet should be tossed.
  *
  */
-int sk_filter(struct sock *sk, struct sk_buff *skb)
+int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap)
 {
        int err;
        struct sk_filter *filter;
@@ -85,14 +86,13 @@ int sk_filter(struct sock *sk, struct sk_buff *skb)
        filter = rcu_dereference(sk->sk_filter);
        if (filter) {
                unsigned int pkt_len = bpf_prog_run_save_cb(filter->prog, skb);
-
-               err = pkt_len ? pskb_trim(skb, pkt_len) : -EPERM;
+               err = pkt_len ? pskb_trim(skb, max(cap, pkt_len)) : -EPERM;
        }
        rcu_read_unlock();
 
        return err;
 }
-EXPORT_SYMBOL(sk_filter);
+EXPORT_SYMBOL(sk_filter_trim_cap);
 
 static u64 __skb_get_pay_offset(u64 ctx, u64 a, u64 x, u64 r4, u64 r5)
 {
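The `sk_filter_trim_cap()` change above alters only the length arithmetic: the filter's verdict `pkt_len` still drops the packet when zero, but the trim target becomes `max(cap, pkt_len)`, so a protocol like DCCP can guarantee its header is never cut off. A minimal sketch of just that arithmetic (pure function, no skb involved; `pskb_trim` never grows a packet, hence the final clamp):

```c
#include <assert.h>

#define EPERM 1

/*
 * Sketch of the sk_filter_trim_cap() length logic: verdict 0 tosses
 * the packet; otherwise trim to the verdict, but never below the
 * caller-supplied cap, and never beyond the current packet length.
 */
static int trimmed_len(unsigned int skb_len, unsigned int pkt_len,
		       unsigned int cap)
{
	unsigned int keep;

	if (pkt_len == 0)
		return -EPERM;				/* toss the packet */
	keep = pkt_len > cap ? pkt_len : cap;		/* max(cap, pkt_len) */
	return keep < skb_len ? (int)keep : (int)skb_len;
}
```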
@@ -2025,6 +2025,47 @@ bool bpf_helper_changes_skb_data(void *func)
        return false;
 }
 
+static unsigned long bpf_skb_copy(void *dst_buff, const void *skb,
+                                 unsigned long len)
+{
+       void *ptr = skb_header_pointer(skb, 0, len, dst_buff);
+
+       if (unlikely(!ptr))
+               return len;
+       if (ptr != dst_buff)
+               memcpy(dst_buff, ptr, len);
+
+       return 0;
+}
+
+static u64 bpf_skb_event_output(u64 r1, u64 r2, u64 flags, u64 r4,
+                               u64 meta_size)
+{
+       struct sk_buff *skb = (struct sk_buff *)(long) r1;
+       struct bpf_map *map = (struct bpf_map *)(long) r2;
+       u64 skb_size = (flags & BPF_F_CTXLEN_MASK) >> 32;
+       void *meta = (void *)(long) r4;
+
+       if (unlikely(flags & ~(BPF_F_CTXLEN_MASK | BPF_F_INDEX_MASK)))
+               return -EINVAL;
+       if (unlikely(skb_size > skb->len))
+               return -EFAULT;
+
+       return bpf_event_output(map, flags, meta, meta_size, skb, skb_size,
+                               bpf_skb_copy);
+}
+
+static const struct bpf_func_proto bpf_skb_event_output_proto = {
+       .func           = bpf_skb_event_output,
+       .gpl_only       = true,
+       .ret_type       = RET_INTEGER,
+       .arg1_type      = ARG_PTR_TO_CTX,
+       .arg2_type      = ARG_CONST_MAP_PTR,
+       .arg3_type      = ARG_ANYTHING,
+       .arg4_type      = ARG_PTR_TO_STACK,
+       .arg5_type      = ARG_CONST_STACK_SIZE,
+};
+
 static unsigned short bpf_tunnel_key_af(u64 flags)
 {
        return flags & BPF_F_TUNINFO_IPV6 ? AF_INET6 : AF_INET;
@@ -2357,7 +2398,7 @@ tc_cls_act_func_proto(enum bpf_func_id func_id)
        case BPF_FUNC_get_hash_recalc:
                return &bpf_get_hash_recalc_proto;
        case BPF_FUNC_perf_event_output:
-               return bpf_get_event_output_proto();
+               return &bpf_skb_event_output_proto;
        case BPF_FUNC_get_smp_processor_id:
                return &bpf_get_smp_processor_id_proto;
 #ifdef CONFIG_SOCK_CGROUP_DATA
@@ -2369,6 +2410,12 @@ tc_cls_act_func_proto(enum bpf_func_id func_id)
        }
 }
 
+static const struct bpf_func_proto *
+xdp_func_proto(enum bpf_func_id func_id)
+{
+       return sk_filter_func_proto(func_id);
+}
+
 static bool __is_valid_access(int off, int size, enum bpf_access_type type)
 {
        if (off < 0 || off >= sizeof(struct __sk_buff))
@@ -2436,6 +2483,44 @@ static bool tc_cls_act_is_valid_access(int off, int size,
        return __is_valid_access(off, size, type);
 }
 
+static bool __is_valid_xdp_access(int off, int size,
+                                 enum bpf_access_type type)
+{
+       if (off < 0 || off >= sizeof(struct xdp_md))
+               return false;
+       if (off % size != 0)
+               return false;
+       if (size != 4)
+               return false;
+
+       return true;
+}
+
+static bool xdp_is_valid_access(int off, int size,
+                               enum bpf_access_type type,
+                               enum bpf_reg_type *reg_type)
+{
+       if (type == BPF_WRITE)
+               return false;
+
+       switch (off) {
+       case offsetof(struct xdp_md, data):
+               *reg_type = PTR_TO_PACKET;
+               break;
+       case offsetof(struct xdp_md, data_end):
+               *reg_type = PTR_TO_PACKET_END;
+               break;
+       }
+
+       return __is_valid_xdp_access(off, size, type);
+}
+
+void bpf_warn_invalid_xdp_action(u32 act)
+{
+       WARN_ONCE(1, "Illegal XDP return value %u, expect packet loss\n", act);
+}
+EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
+
 static u32 bpf_net_convert_ctx_access(enum bpf_access_type type, int dst_reg,
                                      int src_reg, int ctx_off,
                                      struct bpf_insn *insn_buf,
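The `__is_valid_xdp_access()` check above enforces that an XDP program may only load 4-byte, naturally aligned words that fall inside the context struct. A standalone sketch of the same predicate, using a hypothetical two-field stand-in for `struct xdp_md` (so `sizeof` is 8 here; the real struct's size is whatever the UAPI defines):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct xdp_md: two 32-bit fields. */
struct xdp_md_sketch { unsigned int data; unsigned int data_end; };

/*
 * Sketch of __is_valid_xdp_access(): context loads must be 4 bytes
 * wide, aligned, and within the bounds of the context struct.
 */
static int xdp_access_ok(int off, int size)
{
	if (off < 0 || off >= (int)sizeof(struct xdp_md_sketch))
		return 0;
	if (off % size != 0)
		return 0;
	if (size != 4)
		return 0;
	return 1;
}
```

Writes are rejected separately in `xdp_is_valid_access()` before this check runs, and the two valid offsets get `PTR_TO_PACKET` / `PTR_TO_PACKET_END` register types so the verifier can bounds-check packet accesses.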
@@ -2587,6 +2672,29 @@ static u32 bpf_net_convert_ctx_access(enum bpf_access_type type, int dst_reg,
        return insn - insn_buf;
 }
 
+static u32 xdp_convert_ctx_access(enum bpf_access_type type, int dst_reg,
+                                 int src_reg, int ctx_off,
+                                 struct bpf_insn *insn_buf,
+                                 struct bpf_prog *prog)
+{
+       struct bpf_insn *insn = insn_buf;
+
+       switch (ctx_off) {
+       case offsetof(struct xdp_md, data):
+               *insn++ = BPF_LDX_MEM(bytes_to_bpf_size(FIELD_SIZEOF(struct xdp_buff, data)),
+                                     dst_reg, src_reg,
+                                     offsetof(struct xdp_buff, data));
+               break;
+       case offsetof(struct xdp_md, data_end):
+               *insn++ = BPF_LDX_MEM(bytes_to_bpf_size(FIELD_SIZEOF(struct xdp_buff, data_end)),
+                                     dst_reg, src_reg,
+                                     offsetof(struct xdp_buff, data_end));
+               break;
+       }
+
+       return insn - insn_buf;
+}
+
 static const struct bpf_verifier_ops sk_filter_ops = {
        .get_func_proto         = sk_filter_func_proto,
        .is_valid_access        = sk_filter_is_valid_access,
@@ -2599,6 +2707,12 @@ static const struct bpf_verifier_ops tc_cls_act_ops = {
        .convert_ctx_access     = bpf_net_convert_ctx_access,
 };
 
+static const struct bpf_verifier_ops xdp_ops = {
+       .get_func_proto         = xdp_func_proto,
+       .is_valid_access        = xdp_is_valid_access,
+       .convert_ctx_access     = xdp_convert_ctx_access,
+};
+
 static struct bpf_prog_type_list sk_filter_type __read_mostly = {
        .ops    = &sk_filter_ops,
        .type   = BPF_PROG_TYPE_SOCKET_FILTER,
@@ -2614,11 +2728,17 @@ static struct bpf_prog_type_list sched_act_type __read_mostly = {
        .type   = BPF_PROG_TYPE_SCHED_ACT,
 };
 
+static struct bpf_prog_type_list xdp_type __read_mostly = {
+       .ops    = &xdp_ops,
+       .type   = BPF_PROG_TYPE_XDP,
+};
+
 static int __init register_sk_filter_ops(void)
 {
        bpf_register_prog_type(&sk_filter_type);
        bpf_register_prog_type(&sched_cls_type);
        bpf_register_prog_type(&sched_act_type);
+       bpf_register_prog_type(&xdp_type);
 
        return 0;
 }
index 94acfc8..53599bd 100644 (file)
@@ -163,7 +163,7 @@ static void poll_one_napi(struct napi_struct *napi)
         */
        work = napi->poll(napi, 0);
        WARN_ONCE(work, "%pF exceeded budget in poll\n", napi->poll);
-       trace_napi_poll(napi);
+       trace_napi_poll(napi, work, 0);
 
        clear_bit(NAPI_STATE_NPSVC, &napi->state);
 }
index a9e3805..189cc78 100644 (file)
@@ -891,6 +891,16 @@ static size_t rtnl_port_size(const struct net_device *dev,
                return port_self_size;
 }
 
+static size_t rtnl_xdp_size(const struct net_device *dev)
+{
+       size_t xdp_size = nla_total_size(1);    /* XDP_ATTACHED */
+
+       if (!dev->netdev_ops->ndo_xdp)
+               return 0;
+       else
+               return xdp_size;
+}
+
 static noinline size_t if_nlmsg_size(const struct net_device *dev,
                                     u32 ext_filter_mask)
 {
@@ -927,6 +937,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev,
               + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_PORT_ID */
               + nla_total_size(MAX_PHYS_ITEM_ID_LEN) /* IFLA_PHYS_SWITCH_ID */
               + nla_total_size(IFNAMSIZ) /* IFLA_PHYS_PORT_NAME */
+              + rtnl_xdp_size(dev) /* IFLA_XDP */
               + nla_total_size(1); /* IFLA_PROTO_DOWN */
 
 }
@@ -1211,6 +1222,33 @@ static int rtnl_fill_link_ifmap(struct sk_buff *skb, struct net_device *dev)
        return 0;
 }
 
+static int rtnl_xdp_fill(struct sk_buff *skb, struct net_device *dev)
+{
+       struct netdev_xdp xdp_op = {};
+       struct nlattr *xdp;
+       int err;
+
+       if (!dev->netdev_ops->ndo_xdp)
+               return 0;
+       xdp = nla_nest_start(skb, IFLA_XDP);
+       if (!xdp)
+               return -EMSGSIZE;
+       xdp_op.command = XDP_QUERY_PROG;
+       err = dev->netdev_ops->ndo_xdp(dev, &xdp_op);
+       if (err)
+               goto err_cancel;
+       err = nla_put_u8(skb, IFLA_XDP_ATTACHED, xdp_op.prog_attached);
+       if (err)
+               goto err_cancel;
+
+       nla_nest_end(skb, xdp);
+       return 0;
+
+err_cancel:
+       nla_nest_cancel(skb, xdp);
+       return err;
+}
+
 static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
                            int type, u32 pid, u32 seq, u32 change,
                            unsigned int flags, u32 ext_filter_mask)
@@ -1307,6 +1345,9 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
        if (rtnl_port_fill(skb, dev, ext_filter_mask))
                goto nla_put_failure;
 
+       if (rtnl_xdp_fill(skb, dev))
+               goto nla_put_failure;
+
        if (dev->rtnl_link_ops || rtnl_have_link_slave_info(dev)) {
                if (rtnl_link_fill(skb, dev) < 0)
                        goto nla_put_failure;
@@ -1392,6 +1433,7 @@ static const struct nla_policy ifla_policy[IFLA_MAX+1] = {
        [IFLA_PHYS_SWITCH_ID]   = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN },
        [IFLA_LINK_NETNSID]     = { .type = NLA_S32 },
        [IFLA_PROTO_DOWN]       = { .type = NLA_U8 },
+       [IFLA_XDP]              = { .type = NLA_NESTED },
 };
 
 static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
@@ -1429,6 +1471,11 @@ static const struct nla_policy ifla_port_policy[IFLA_PORT_MAX+1] = {
        [IFLA_PORT_RESPONSE]    = { .type = NLA_U16, },
 };
 
+static const struct nla_policy ifla_xdp_policy[IFLA_XDP_MAX + 1] = {
+       [IFLA_XDP_FD]           = { .type = NLA_S32 },
+       [IFLA_XDP_ATTACHED]     = { .type = NLA_U8 },
+};
+
 static const struct rtnl_link_ops *linkinfo_to_kind_ops(const struct nlattr *nla)
 {
        const struct rtnl_link_ops *ops = NULL;
@@ -2054,6 +2101,27 @@ static int do_setlink(const struct sk_buff *skb,
                status |= DO_SETLINK_NOTIFY;
        }
 
+       if (tb[IFLA_XDP]) {
+               struct nlattr *xdp[IFLA_XDP_MAX + 1];
+
+               err = nla_parse_nested(xdp, IFLA_XDP_MAX, tb[IFLA_XDP],
+                                      ifla_xdp_policy);
+               if (err < 0)
+                       goto errout;
+
+               if (xdp[IFLA_XDP_ATTACHED]) {
+                       err = -EINVAL;
+                       goto errout;
+               }
+               if (xdp[IFLA_XDP_FD]) {
+                       err = dev_change_xdp_fd(dev,
+                                               nla_get_s32(xdp[IFLA_XDP_FD]));
+                       if (err)
+                               goto errout;
+                       status |= DO_SETLINK_NOTIFY;
+               }
+       }
+
 errout:
        if (status & DO_SETLINK_MODIFIED) {
                if (status & DO_SETLINK_NOTIFY)
index 08bf97e..25dab8b 100644 (file)
@@ -452,11 +452,12 @@ int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 }
 EXPORT_SYMBOL(sock_queue_rcv_skb);
 
-int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested)
+int __sk_receive_skb(struct sock *sk, struct sk_buff *skb,
+                    const int nested, unsigned int trim_cap)
 {
        int rc = NET_RX_SUCCESS;
 
-       if (sk_filter(sk, skb))
+       if (sk_filter_trim_cap(sk, skb, trim_cap))
                goto discard_and_relse;
 
        skb->dev = NULL;
@@ -492,7 +493,7 @@ discard_and_relse:
        kfree_skb(skb);
        goto out;
 }
-EXPORT_SYMBOL(sk_receive_skb);
+EXPORT_SYMBOL(__sk_receive_skb);
 
 struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie)
 {
@@ -1938,6 +1939,10 @@ int __sock_cmsg_send(struct sock *sk, struct msghdr *msg, struct cmsghdr *cmsg,
                sockc->tsflags &= ~SOF_TIMESTAMPING_TX_RECORD_MASK;
                sockc->tsflags |= tsflags;
                break;
+       /* SCM_RIGHTS and SCM_CREDENTIALS are semantically in SOL_UNIX. */
+       case SCM_RIGHTS:
+       case SCM_CREDENTIALS:
+               break;
        default:
                return -EINVAL;
        }
index 5c7e413..345a3ae 100644 (file)
@@ -462,7 +462,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk,
        security_skb_classify_flow(skb, flowi4_to_flowi(&fl4));
        rt = ip_route_output_flow(net, &fl4, sk);
        if (IS_ERR(rt)) {
-               __IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
+               IP_INC_STATS(net, IPSTATS_MIB_OUTNOROUTES);
                return NULL;
        }
 
@@ -527,17 +527,19 @@ static void dccp_v4_ctl_send_reset(const struct sock *sk, struct sk_buff *rxskb)
                                                                 rxiph->daddr);
        skb_dst_set(skb, dst_clone(dst));
 
+       local_bh_disable();
        bh_lock_sock(ctl_sk);
        err = ip_build_and_send_pkt(skb, ctl_sk,
                                    rxiph->daddr, rxiph->saddr, NULL);
        bh_unlock_sock(ctl_sk);
 
        if (net_xmit_eval(err) == 0) {
-               DCCP_INC_STATS(DCCP_MIB_OUTSEGS);
-               DCCP_INC_STATS(DCCP_MIB_OUTRSTS);
+               __DCCP_INC_STATS(DCCP_MIB_OUTSEGS);
+               __DCCP_INC_STATS(DCCP_MIB_OUTRSTS);
        }
+       local_bh_enable();
 out:
-        dst_release(dst);
+       dst_release(dst);
 }
 
 static void dccp_v4_reqsk_destructor(struct request_sock *req)
@@ -866,7 +868,7 @@ lookup:
                goto discard_and_relse;
        nf_reset(skb);
 
-       return sk_receive_skb(sk, skb, 1);
+       return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4);
 
 no_dccp_socket:
        if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
index d176f4e..3ff137d 100644 (file)
@@ -732,7 +732,7 @@ lookup:
        if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
                goto discard_and_relse;
 
-       return sk_receive_skb(sk, skb, 1) ? -1 : 0;
+       return __sk_receive_skb(sk, skb, 1, dh->dccph_doff * 4) ? -1 : 0;
 
 no_dccp_socket:
        if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
index 766d2a5..7e68bc6 100644 (file)
@@ -774,11 +774,17 @@ static int dsa_of_probe(struct device *dev)
 
        chip_index = -1;
        for_each_available_child_of_node(np, child) {
+               int i;
+
                chip_index++;
                cd = &pd->chip[chip_index];
 
                cd->of_node = child;
 
+               /* Initialize the routing table */
+               for (i = 0; i < DSA_MAX_SWITCHES; ++i)
+                       cd->rtable[i] = DSA_RTABLE_NONE;
+
                /* When assigning the host device, increment its refcount */
                cd->host_dev = get_device(&mdio_bus->dev);
 
index 83b95fc..f30bad9 100644 (file)
@@ -595,7 +595,7 @@ static int _dsa_register_switch(struct dsa_switch *ds, struct device_node *np)
        struct device_node *ports = dsa_get_ports(ds, np);
        struct dsa_switch_tree *dst;
        u32 tree, index;
-       int err;
+       int i, err;
 
        err = dsa_parse_member(np, &tree, &index);
        if (err)
@@ -622,6 +622,11 @@ static int _dsa_register_switch(struct dsa_switch *ds, struct device_node *np)
 
        ds->dst = dst;
        ds->index = index;
+
+       /* Initialize the routing table */
+       for (i = 0; i < DSA_MAX_SWITCHES; ++i)
+               ds->rtable[i] = DSA_RTABLE_NONE;
+
        dsa_dst_add_ds(dst, ds, index);
 
        err = dsa_dst_complete(dst);
@@ -672,7 +677,7 @@ int dsa_register_switch(struct dsa_switch *ds, struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(dsa_register_switch);
 
-void _dsa_unregister_switch(struct dsa_switch *ds)
+static void _dsa_unregister_switch(struct dsa_switch *ds)
 {
        struct dsa_switch_tree *dst = ds->dst;
 
index 7236eb2..fc91967 100644 (file)
@@ -333,6 +333,44 @@ static int dsa_slave_vlan_filtering(struct net_device *dev,
        return 0;
 }
 
+static int dsa_fastest_ageing_time(struct dsa_switch *ds,
+                                  unsigned int ageing_time)
+{
+       int i;
+
+       for (i = 0; i < DSA_MAX_PORTS; ++i) {
+               struct dsa_port *dp = &ds->ports[i];
+
+               if (dp && dp->ageing_time && dp->ageing_time < ageing_time)
+                       ageing_time = dp->ageing_time;
+       }
+
+       return ageing_time;
+}
+
+static int dsa_slave_ageing_time(struct net_device *dev,
+                                const struct switchdev_attr *attr,
+                                struct switchdev_trans *trans)
+{
+       struct dsa_slave_priv *p = netdev_priv(dev);
+       struct dsa_switch *ds = p->parent;
+       unsigned long ageing_jiffies = clock_t_to_jiffies(attr->u.ageing_time);
+       unsigned int ageing_time = jiffies_to_msecs(ageing_jiffies);
+
+       /* bridge skips -EOPNOTSUPP, so skip the prepare phase */
+       if (switchdev_trans_ph_prepare(trans))
+               return 0;
+
+       /* Keep the fastest ageing time in case of multiple bridges */
+       ds->ports[p->port].ageing_time = ageing_time;
+       ageing_time = dsa_fastest_ageing_time(ds, ageing_time);
+
+       if (ds->drv->set_ageing_time)
+               return ds->drv->set_ageing_time(ds, ageing_time);
+
+       return 0;
+}
+
 static int dsa_slave_port_attr_set(struct net_device *dev,
                                   const struct switchdev_attr *attr,
                                   struct switchdev_trans *trans)
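`dsa_fastest_ageing_time()` above resolves a conflict between multiple bridges sharing one switch: the hardware has a single ageing timer, so the driver is given the smallest requested value, treating an unset per-port time (0) as "no constraint". A minimal sketch of that min-scan over a fixed-size port array (the array and its size are illustrative, not the kernel's `dsa_port` layout):

```c
#include <assert.h>

#define N_PORTS 4

/*
 * Sketch of dsa_fastest_ageing_time(): keep the smallest non-zero
 * per-port ageing time so no bridge sees FDB entries age slower
 * than it asked for (0 = port not configured).
 */
static unsigned int fastest_ageing(const unsigned int *port_times,
				   unsigned int requested)
{
	unsigned int best = requested;
	int i;

	for (i = 0; i < N_PORTS; i++)
		if (port_times[i] && port_times[i] < best)
			best = port_times[i];
	return best;
}
```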
@@ -346,6 +384,9 @@ static int dsa_slave_port_attr_set(struct net_device *dev,
        case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
                ret = dsa_slave_vlan_filtering(dev, attr, trans);
                break;
+       case SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME:
+               ret = dsa_slave_ageing_time(dev, attr, trans);
+               break;
        default:
                ret = -EOPNOTSUPP;
                break;
index 935ab93..d7efbf0 100644 (file)
@@ -130,8 +130,7 @@ static int lowpan_newlink(struct net *src_net, struct net_device *ldev,
 
        pr_debug("adding new link\n");
 
-       if (!tb[IFLA_LINK] ||
-           !net_eq(dev_net(ldev), &init_net))
+       if (!tb[IFLA_LINK])
                return -EINVAL;
        /* find and hold wpan device */
        wdev = dev_get_by_index(dev_net(ldev), nla_get_u32(tb[IFLA_LINK]));
index ef185dd..649e7d4 100644 (file)
@@ -262,7 +262,7 @@ static inline bool lowpan_rx_h_check(struct sk_buff *skb)
 
        /* check on ieee802154 conform 6LoWPAN header */
        if (!ieee802154_is_data(fc) ||
-           !ieee802154_is_intra_pan(fc))
+           !ieee802154_skb_is_intra_pan_addressing(fc, skb))
                return false;
 
        /* check if we can dereference the dispatch */
index c35fdfa..cb7176c 100644 (file)
@@ -140,6 +140,8 @@ wpan_phy_new(const struct cfg802154_ops *ops, size_t priv_size)
        rdev->wpan_phy.dev.class = &wpan_phy_class;
        rdev->wpan_phy.dev.platform_data = rdev;
 
+       wpan_phy_net_set(&rdev->wpan_phy, &init_net);
+
        init_waitqueue_head(&rdev->dev_wait);
 
        return &rdev->wpan_phy;
@@ -207,6 +209,49 @@ void wpan_phy_free(struct wpan_phy *phy)
 }
 EXPORT_SYMBOL(wpan_phy_free);
 
+int cfg802154_switch_netns(struct cfg802154_registered_device *rdev,
+                          struct net *net)
+{
+       struct wpan_dev *wpan_dev;
+       int err = 0;
+
+       list_for_each_entry(wpan_dev, &rdev->wpan_dev_list, list) {
+               if (!wpan_dev->netdev)
+                       continue;
+               wpan_dev->netdev->features &= ~NETIF_F_NETNS_LOCAL;
+               err = dev_change_net_namespace(wpan_dev->netdev, net, "wpan%d");
+               if (err)
+                       break;
+               wpan_dev->netdev->features |= NETIF_F_NETNS_LOCAL;
+       }
+
+       if (err) {
+               /* failed -- clean up to old netns */
+               net = wpan_phy_net(&rdev->wpan_phy);
+
+               list_for_each_entry_continue_reverse(wpan_dev,
+                                                    &rdev->wpan_dev_list,
+                                                    list) {
+                       if (!wpan_dev->netdev)
+                               continue;
+                       wpan_dev->netdev->features &= ~NETIF_F_NETNS_LOCAL;
+                       err = dev_change_net_namespace(wpan_dev->netdev, net,
+                                                      "wpan%d");
+                       WARN_ON(err);
+                       wpan_dev->netdev->features |= NETIF_F_NETNS_LOCAL;
+               }
+
+               return err;
+       }
+
+       wpan_phy_net_set(&rdev->wpan_phy, net);
+
+       err = device_rename(&rdev->wpan_phy.dev, dev_name(&rdev->wpan_phy.dev));
+       WARN_ON(err);
+
+       return 0;
+}
+
 void cfg802154_dev_free(struct cfg802154_registered_device *rdev)
 {
        kfree(rdev);
@@ -286,14 +331,34 @@ static struct notifier_block cfg802154_netdev_notifier = {
        .notifier_call = cfg802154_netdev_notifier_call,
 };
 
+static void __net_exit cfg802154_pernet_exit(struct net *net)
+{
+       struct cfg802154_registered_device *rdev;
+
+       rtnl_lock();
+       list_for_each_entry(rdev, &cfg802154_rdev_list, list) {
+               if (net_eq(wpan_phy_net(&rdev->wpan_phy), net))
+                       WARN_ON(cfg802154_switch_netns(rdev, &init_net));
+       }
+       rtnl_unlock();
+}
+
+static struct pernet_operations cfg802154_pernet_ops = {
+       .exit = cfg802154_pernet_exit,
+};
+
 static int __init wpan_phy_class_init(void)
 {
        int rc;
 
-       rc = wpan_phy_sysfs_init();
+       rc = register_pernet_device(&cfg802154_pernet_ops);
        if (rc)
                goto err;
 
+       rc = wpan_phy_sysfs_init();
+       if (rc)
+               goto err_sysfs;
+
        rc = register_netdevice_notifier(&cfg802154_netdev_notifier);
        if (rc)
                goto err_nl;
@@ -315,6 +380,8 @@ err_notifier:
        unregister_netdevice_notifier(&cfg802154_netdev_notifier);
 err_nl:
        wpan_phy_sysfs_exit();
+err_sysfs:
+       unregister_pernet_device(&cfg802154_pernet_ops);
 err:
        return rc;
 }
@@ -326,6 +393,7 @@ static void __exit wpan_phy_class_exit(void)
        ieee802154_nl_exit();
        unregister_netdevice_notifier(&cfg802154_netdev_notifier);
        wpan_phy_sysfs_exit();
+       unregister_pernet_device(&cfg802154_pernet_ops);
 }
 module_exit(wpan_phy_class_exit);
 
index 231fade..81141f5 100644 (file)
@@ -38,6 +38,8 @@ wpan_phy_to_rdev(struct wpan_phy *wpan_phy)
 extern struct list_head cfg802154_rdev_list;
 extern int cfg802154_rdev_list_generation;
 
+int cfg802154_switch_netns(struct cfg802154_registered_device *rdev,
+                          struct net *net);
 /* free object */
 void cfg802154_dev_free(struct cfg802154_registered_device *rdev);
 struct cfg802154_registered_device *
index 116187b..d90a4ed 100644 (file)
@@ -80,7 +80,8 @@ __cfg802154_wpan_dev_from_attrs(struct net *netns, struct nlattr **attrs)
        list_for_each_entry(rdev, &cfg802154_rdev_list, list) {
                struct wpan_dev *wpan_dev;
 
-               /* TODO netns compare */
+               if (wpan_phy_net(&rdev->wpan_phy) != netns)
+                       continue;
 
                if (have_wpan_dev_id && rdev->wpan_phy_idx != wpan_phy_idx)
                        continue;
@@ -175,7 +176,8 @@ __cfg802154_rdev_from_attrs(struct net *netns, struct nlattr **attrs)
        if (!rdev)
                return ERR_PTR(-ENODEV);
 
-       /* TODO netns compare */
+       if (netns != wpan_phy_net(&rdev->wpan_phy))
+               return ERR_PTR(-ENODEV);
 
        return rdev;
 }
@@ -233,6 +235,8 @@ static const struct nla_policy nl802154_policy[NL802154_ATTR_MAX+1] = {
 
        [NL802154_ATTR_ACKREQ_DEFAULT] = { .type = NLA_U8 },
 
+       [NL802154_ATTR_PID] = { .type = NLA_U32 },
+       [NL802154_ATTR_NETNS_FD] = { .type = NLA_U32 },
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
        [NL802154_ATTR_SEC_ENABLED] = { .type = NLA_U8, },
        [NL802154_ATTR_SEC_OUT_LEVEL] = { .type = NLA_U32, },
@@ -590,7 +594,6 @@ static int nl802154_dump_wpan_phy_parse(struct sk_buff *skb,
                struct cfg802154_registered_device *rdev;
                int ifidx = nla_get_u32(tb[NL802154_ATTR_IFINDEX]);
 
-               /* TODO netns */
                netdev = __dev_get_by_index(&init_net, ifidx);
                if (!netdev)
                        return -ENODEV;
@@ -629,7 +632,8 @@ nl802154_dump_wpan_phy(struct sk_buff *skb, struct netlink_callback *cb)
        }
 
        list_for_each_entry(rdev, &cfg802154_rdev_list, list) {
-               /* TODO net ns compare */
+               if (!net_eq(wpan_phy_net(&rdev->wpan_phy), sock_net(skb->sk)))
+                       continue;
                if (++idx <= state->start)
                        continue;
                if (state->filter_wpan_phy != -1 &&
@@ -871,7 +875,8 @@ nl802154_dump_interface(struct sk_buff *skb, struct netlink_callback *cb)
 
        rtnl_lock();
        list_for_each_entry(rdev, &cfg802154_rdev_list, list) {
-               /* TODO netns compare */
+               if (!net_eq(wpan_phy_net(&rdev->wpan_phy), sock_net(skb->sk)))
+                       continue;
                if (wp_idx < wp_start) {
                        wp_idx++;
                        continue;
@@ -1271,6 +1276,37 @@ nl802154_set_ackreq_default(struct sk_buff *skb, struct genl_info *info)
        return rdev_set_ackreq_default(rdev, wpan_dev, ackreq);
 }
 
+static int nl802154_wpan_phy_netns(struct sk_buff *skb, struct genl_info *info)
+{
+       struct cfg802154_registered_device *rdev = info->user_ptr[0];
+       struct net *net;
+       int err;
+
+       if (info->attrs[NL802154_ATTR_PID]) {
+               u32 pid = nla_get_u32(info->attrs[NL802154_ATTR_PID]);
+
+               net = get_net_ns_by_pid(pid);
+       } else if (info->attrs[NL802154_ATTR_NETNS_FD]) {
+               u32 fd = nla_get_u32(info->attrs[NL802154_ATTR_NETNS_FD]);
+
+               net = get_net_ns_by_fd(fd);
+       } else {
+               return -EINVAL;
+       }
+
+       if (IS_ERR(net))
+               return PTR_ERR(net);
+
+       err = 0;
+
+       /* check if anything to do */
+       if (!net_eq(wpan_phy_net(&rdev->wpan_phy), net))
+               err = cfg802154_switch_netns(rdev, net);
+
+       put_net(net);
+       return err;
+}
+
 #ifdef CONFIG_IEEE802154_NL802154_EXPERIMENTAL
 static const struct nla_policy nl802154_dev_addr_policy[NL802154_DEV_ADDR_ATTR_MAX + 1] = {
        [NL802154_DEV_ADDR_ATTR_PAN_ID] = { .type = NLA_U16 },
@@ -2261,6 +2297,14 @@ static const struct genl_ops nl802154_ops[] = {
                .internal_flags = NL802154_FLAG_NEED_WPAN_PHY |
                                  NL802154_FLAG_NEED_RTNL,
        },
+       {
+               .cmd = NL802154_CMD_SET_WPAN_PHY_NETNS,
+               .doit = nl802154_wpan_phy_netns,
+               .policy = nl802154_policy,
+               .flags = GENL_ADMIN_PERM,
+               .internal_flags = NL802154_FLAG_NEED_WPAN_PHY |
+                                 NL802154_FLAG_NEED_RTNL,
+       },
        {
                .cmd = NL802154_CMD_SET_PAN_ID,
                .doit = nl802154_set_pan_id,
index d39e9e4..55513e6 100644 (file)
@@ -73,7 +73,7 @@
 #include <linux/socket.h>
 #include <linux/in.h>
 #include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/kmod.h>
 #include <linux/sched.h>
 #include <linux/timer.h>
 #include <linux/string.h>
@@ -1916,6 +1916,3 @@ static int __init ipv4_proc_init(void)
        return 0;
 }
 #endif /* CONFIG_PROC_FS */
-
-MODULE_ALIAS_NETPROTO(PF_INET);
-
index e333bc8..415e117 100644 (file)
@@ -1834,7 +1834,7 @@ void inet_netconf_notify_devconf(struct net *net, int type, int ifindex,
        struct sk_buff *skb;
        int err = -ENOBUFS;
 
-       skb = nlmsg_new(inet_netconf_msgsize_devconf(type), GFP_ATOMIC);
+       skb = nlmsg_new(inet_netconf_msgsize_devconf(type), GFP_KERNEL);
        if (!skb)
                goto errout;
 
@@ -1846,7 +1846,7 @@ void inet_netconf_notify_devconf(struct net *net, int type, int ifindex,
                kfree_skb(skb);
                goto errout;
        }
-       rtnl_notify(skb, net, 0, RTNLGRP_IPV4_NETCONF, NULL, GFP_ATOMIC);
+       rtnl_notify(skb, net, 0, RTNLGRP_IPV4_NETCONF, NULL, GFP_KERNEL);
        return;
 errout:
        if (err < 0)
@@ -1903,7 +1903,7 @@ static int inet_netconf_get_devconf(struct sk_buff *in_skb,
        }
 
        err = -ENOBUFS;
-       skb = nlmsg_new(inet_netconf_msgsize_devconf(NETCONFA_ALL), GFP_ATOMIC);
+       skb = nlmsg_new(inet_netconf_msgsize_devconf(NETCONFA_ALL), GFP_KERNEL);
        if (!skb)
                goto errout;
 
@@ -2027,16 +2027,16 @@ static void inet_forward_change(struct net *net)
 
        for_each_netdev(net, dev) {
                struct in_device *in_dev;
+
                if (on)
                        dev_disable_lro(dev);
-               rcu_read_lock();
-               in_dev = __in_dev_get_rcu(dev);
+
+               in_dev = __in_dev_get_rtnl(dev);
                if (in_dev) {
                        IN_DEV_CONF_SET(in_dev, FORWARDING, on);
                        inet_netconf_notify_devconf(net, NETCONFA_FORWARDING,
                                                    dev->ifindex, &in_dev->cnf);
                }
-               rcu_read_unlock();
        }
 }
 
index d09173b..539fa26 100644 (file)
@@ -479,6 +479,9 @@ static int fib_get_nhs(struct fib_info *fi, struct rtnexthop *rtnh,
                if (!rtnh_ok(rtnh, remaining))
                        return -EINVAL;
 
+               if (rtnh->rtnh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))
+                       return -EINVAL;
+
                nexthop_nh->nh_flags =
                        (cfg->fc_flags & ~0xFF) | rtnh->rtnh_flags;
                nexthop_nh->nh_oif = rtnh->rtnh_ifindex;
@@ -1003,6 +1006,9 @@ struct fib_info *fib_create_info(struct fib_config *cfg)
        if (fib_props[cfg->fc_type].scope > cfg->fc_scope)
                goto err_inval;
 
+       if (cfg->fc_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))
+               goto err_inval;
+
 #ifdef CONFIG_IP_ROUTE_MULTIPATH
        if (cfg->fc_mp) {
                nhs = fib_count_nexthops(cfg->fc_mp, cfg->fc_mp_len);
index 9f0a7b9..8b4ffd2 100644 (file)
@@ -117,7 +117,7 @@ int ip_forward(struct sk_buff *skb)
        if (opt->is_strictroute && rt->rt_uses_gateway)
                goto sr_failed;
 
-       IPCB(skb)->flags |= IPSKB_FORWARDED;
+       IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;
        mtu = ip_dst_mtu_maybe_forward(&rt->dst, true);
        if (ip_exceeds_mtu(skb, mtu)) {
                IP_INC_STATS(net, IPSTATS_MIB_FRAGFAILS);
index e23f141..dde37fb 100644 (file)
@@ -223,8 +223,10 @@ static int ip_finish_output_gso(struct net *net, struct sock *sk,
        struct sk_buff *segs;
        int ret = 0;
 
-       /* common case: locally created skb or seglen is <= mtu */
-       if (((IPCB(skb)->flags & IPSKB_FORWARDED) == 0) ||
+       /* common case: fragmentation of segments is not allowed,
+        * or seglen is <= mtu
+        */
+       if (((IPCB(skb)->flags & IPSKB_FRAG_SEGS) == 0) ||
              skb_gso_validate_mtu(skb, mtu))
                return ip_finish_output2(net, sk, skb);
 
index afd6b59..9d847c3 100644 (file)
@@ -63,6 +63,7 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
        int pkt_len = skb->len - skb_inner_network_offset(skb);
        struct net *net = dev_net(rt->dst.dev);
        struct net_device *dev = skb->dev;
+       int skb_iif = skb->skb_iif;
        struct iphdr *iph;
        int err;
 
@@ -72,6 +73,14 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
        skb_dst_set(skb, &rt->dst);
        memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
 
+       if (skb_iif && proto == IPPROTO_UDP) {
+               /* Arrived from an ingress interface and got UDP encapsulated.
+                * The encapsulated network segment length may exceed dst mtu.
+                * Allow IP fragmentation of segments.
+                */
+               IPCB(skb)->flags |= IPSKB_FRAG_SEGS;
+       }
+
        /* Push down and install the IP header. */
        skb_push(skb, sizeof(struct iphdr));
        skb_reset_network_header(skb);
index 9783701..4ae3f8e 100644 (file)
@@ -148,14 +148,14 @@ static int ipip_err(struct sk_buff *skb, u32 info)
 
        if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
                ipv4_update_pmtu(skb, dev_net(skb->dev), info,
-                                t->parms.link, 0, IPPROTO_IPIP, 0);
+                                t->parms.link, 0, iph->protocol, 0);
                err = 0;
                goto out;
        }
 
        if (type == ICMP_REDIRECT) {
                ipv4_redirect(skb, dev_net(skb->dev), t->parms.link, 0,
-                             IPPROTO_IPIP, 0);
+                             iph->protocol, 0);
                err = 0;
                goto out;
        }
@@ -177,12 +177,19 @@ out:
        return err;
 }
 
-static const struct tnl_ptk_info tpi = {
+static const struct tnl_ptk_info ipip_tpi = {
        /* no tunnel info required for ipip. */
        .proto = htons(ETH_P_IP),
 };
 
-static int ipip_rcv(struct sk_buff *skb)
+#if IS_ENABLED(CONFIG_MPLS)
+static const struct tnl_ptk_info mplsip_tpi = {
+       /* no tunnel info required for mplsip. */
+       .proto = htons(ETH_P_MPLS_UC),
+};
+#endif
+
+static int ipip_tunnel_rcv(struct sk_buff *skb, u8 ipproto)
 {
        struct net *net = dev_net(skb->dev);
        struct ip_tunnel_net *itn = net_generic(net, ipip_net_id);
@@ -193,11 +200,23 @@ static int ipip_rcv(struct sk_buff *skb)
        tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY,
                        iph->saddr, iph->daddr, 0);
        if (tunnel) {
+               const struct tnl_ptk_info *tpi;
+
+               if (tunnel->parms.iph.protocol != ipproto &&
+                   tunnel->parms.iph.protocol != 0)
+                       goto drop;
+
                if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
                        goto drop;
-               if (iptunnel_pull_header(skb, 0, tpi.proto, false))
+#if IS_ENABLED(CONFIG_MPLS)
+               if (ipproto == IPPROTO_MPLS)
+                       tpi = &mplsip_tpi;
+               else
+#endif
+                       tpi = &ipip_tpi;
+               if (iptunnel_pull_header(skb, 0, tpi->proto, false))
                        goto drop;
-               return ip_tunnel_rcv(tunnel, skb, &tpi, NULL, log_ecn_error);
+               return ip_tunnel_rcv(tunnel, skb, tpi, NULL, log_ecn_error);
        }
 
        return -1;
@@ -207,24 +226,51 @@ drop:
        return 0;
 }
 
+static int ipip_rcv(struct sk_buff *skb)
+{
+       return ipip_tunnel_rcv(skb, IPPROTO_IPIP);
+}
+
+#if IS_ENABLED(CONFIG_MPLS)
+static int mplsip_rcv(struct sk_buff *skb)
+{
+       return ipip_tunnel_rcv(skb, IPPROTO_MPLS);
+}
+#endif
+
 /*
  *     This function assumes it is being called from dev_queue_xmit()
  *     and that skb is filled properly by that function.
  */
-static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb,
+                                   struct net_device *dev)
 {
        struct ip_tunnel *tunnel = netdev_priv(dev);
        const struct iphdr  *tiph = &tunnel->parms.iph;
+       u8 ipproto;
+
+       switch (skb->protocol) {
+       case htons(ETH_P_IP):
+               ipproto = IPPROTO_IPIP;
+               break;
+#if IS_ENABLED(CONFIG_MPLS)
+       case htons(ETH_P_MPLS_UC):
+               ipproto = IPPROTO_MPLS;
+               break;
+#endif
+       default:
+               goto tx_error;
+       }
 
-       if (unlikely(skb->protocol != htons(ETH_P_IP)))
+       if (tiph->protocol != ipproto && tiph->protocol != 0)
                goto tx_error;
 
        if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP4))
                goto tx_error;
 
-       skb_set_inner_ipproto(skb, IPPROTO_IPIP);
+       skb_set_inner_ipproto(skb, ipproto);
 
-       ip_tunnel_xmit(skb, dev, tiph, tiph->protocol);
+       ip_tunnel_xmit(skb, dev, tiph, ipproto);
        return NETDEV_TX_OK;
 
 tx_error:
@@ -234,6 +280,20 @@ tx_error:
        return NETDEV_TX_OK;
 }
 
+static bool ipip_tunnel_ioctl_verify_protocol(u8 ipproto)
+{
+       switch (ipproto) {
+       case 0:
+       case IPPROTO_IPIP:
+#if IS_ENABLED(CONFIG_MPLS)
+       case IPPROTO_MPLS:
+#endif
+               return true;
+       }
+
+       return false;
+}
+
 static int
 ipip_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 {
@@ -244,7 +304,8 @@ ipip_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
                return -EFAULT;
 
        if (cmd == SIOCADDTUNNEL || cmd == SIOCCHGTUNNEL) {
-               if (p.iph.version != 4 || p.iph.protocol != IPPROTO_IPIP ||
+               if (p.iph.version != 4 ||
+                   !ipip_tunnel_ioctl_verify_protocol(p.iph.protocol) ||
                    p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
                        return -EINVAL;
        }
@@ -301,10 +362,23 @@ static int ipip_tunnel_init(struct net_device *dev)
 
        tunnel->tun_hlen = 0;
        tunnel->hlen = tunnel->tun_hlen + tunnel->encap_hlen;
-       tunnel->parms.iph.protocol = IPPROTO_IPIP;
        return ip_tunnel_init(dev);
 }
 
+static int ipip_tunnel_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+       u8 proto;
+
+       if (!data || !data[IFLA_IPTUN_PROTO])
+               return 0;
+
+       proto = nla_get_u8(data[IFLA_IPTUN_PROTO]);
+       if (proto != IPPROTO_IPIP && proto != IPPROTO_MPLS && proto != 0)
+               return -EINVAL;
+
+       return 0;
+}
+
 static void ipip_netlink_parms(struct nlattr *data[],
                               struct ip_tunnel_parm *parms)
 {
@@ -335,6 +409,9 @@ static void ipip_netlink_parms(struct nlattr *data[],
        if (data[IFLA_IPTUN_TOS])
                parms->iph.tos = nla_get_u8(data[IFLA_IPTUN_TOS]);
 
+       if (data[IFLA_IPTUN_PROTO])
+               parms->iph.protocol = nla_get_u8(data[IFLA_IPTUN_PROTO]);
+
        if (!data[IFLA_IPTUN_PMTUDISC] || nla_get_u8(data[IFLA_IPTUN_PMTUDISC]))
                parms->iph.frag_off = htons(IP_DF);
 }
@@ -427,6 +504,8 @@ static size_t ipip_get_size(const struct net_device *dev)
                nla_total_size(1) +
                /* IFLA_IPTUN_TOS */
                nla_total_size(1) +
+               /* IFLA_IPTUN_PROTO */
+               nla_total_size(1) +
                /* IFLA_IPTUN_PMTUDISC */
                nla_total_size(1) +
                /* IFLA_IPTUN_ENCAP_TYPE */
@@ -450,6 +529,7 @@ static int ipip_fill_info(struct sk_buff *skb, const struct net_device *dev)
            nla_put_in_addr(skb, IFLA_IPTUN_REMOTE, parm->iph.daddr) ||
            nla_put_u8(skb, IFLA_IPTUN_TTL, parm->iph.ttl) ||
            nla_put_u8(skb, IFLA_IPTUN_TOS, parm->iph.tos) ||
+           nla_put_u8(skb, IFLA_IPTUN_PROTO, parm->iph.protocol) ||
            nla_put_u8(skb, IFLA_IPTUN_PMTUDISC,
                       !!(parm->iph.frag_off & htons(IP_DF))))
                goto nla_put_failure;
@@ -476,6 +556,7 @@ static const struct nla_policy ipip_policy[IFLA_IPTUN_MAX + 1] = {
        [IFLA_IPTUN_REMOTE]             = { .type = NLA_U32 },
        [IFLA_IPTUN_TTL]                = { .type = NLA_U8 },
        [IFLA_IPTUN_TOS]                = { .type = NLA_U8 },
+       [IFLA_IPTUN_PROTO]              = { .type = NLA_U8 },
        [IFLA_IPTUN_PMTUDISC]           = { .type = NLA_U8 },
        [IFLA_IPTUN_ENCAP_TYPE]         = { .type = NLA_U16 },
        [IFLA_IPTUN_ENCAP_FLAGS]        = { .type = NLA_U16 },
@@ -489,6 +570,7 @@ static struct rtnl_link_ops ipip_link_ops __read_mostly = {
        .policy         = ipip_policy,
        .priv_size      = sizeof(struct ip_tunnel),
        .setup          = ipip_tunnel_setup,
+       .validate       = ipip_tunnel_validate,
        .newlink        = ipip_newlink,
        .changelink     = ipip_changelink,
        .dellink        = ip_tunnel_dellink,
@@ -503,6 +585,14 @@ static struct xfrm_tunnel ipip_handler __read_mostly = {
        .priority       =       1,
 };
 
+#if IS_ENABLED(CONFIG_MPLS)
+static struct xfrm_tunnel mplsip_handler __read_mostly = {
+       .handler        =       mplsip_rcv,
+       .err_handler    =       ipip_err,
+       .priority       =       1,
+};
+#endif
+
 static int __net_init ipip_init_net(struct net *net)
 {
        return ip_tunnel_init_net(net, ipip_net_id, &ipip_link_ops, "tunl0");
@@ -525,7 +615,7 @@ static int __init ipip_init(void)
 {
        int err;
 
-       pr_info("ipip: IPv4 over IPv4 tunneling driver\n");
+       pr_info("ipip: IPv4 and MPLS over IPv4 tunneling driver\n");
 
        err = register_pernet_device(&ipip_net_ops);
        if (err < 0)
@@ -533,8 +623,15 @@ static int __init ipip_init(void)
        err = xfrm4_tunnel_register(&ipip_handler, AF_INET);
        if (err < 0) {
                pr_info("%s: can't register tunnel\n", __func__);
-               goto xfrm_tunnel_failed;
+               goto xfrm_tunnel_ipip_failed;
+       }
+#if IS_ENABLED(CONFIG_MPLS)
+       err = xfrm4_tunnel_register(&mplsip_handler, AF_MPLS);
+       if (err < 0) {
+               pr_info("%s: can't register tunnel\n", __func__);
+               goto xfrm_tunnel_mplsip_failed;
        }
+#endif
        err = rtnl_link_register(&ipip_link_ops);
        if (err < 0)
                goto rtnl_link_failed;
@@ -543,8 +640,13 @@ out:
        return err;
 
 rtnl_link_failed:
+#if IS_ENABLED(CONFIG_MPLS)
+       xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS);
+xfrm_tunnel_mplsip_failed:
+
+#endif
        xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
-xfrm_tunnel_failed:
+xfrm_tunnel_ipip_failed:
        unregister_pernet_device(&ipip_net_ops);
        goto out;
 }
@@ -554,7 +656,10 @@ static void __exit ipip_fini(void)
        rtnl_link_unregister(&ipip_link_ops);
        if (xfrm4_tunnel_deregister(&ipip_handler, AF_INET))
                pr_info("%s: can't deregister tunnel\n", __func__);
-
+#if IS_ENABLED(CONFIG_MPLS)
+       if (xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS))
+               pr_info("%s: can't deregister tunnel\n", __func__);
+#endif
        unregister_pernet_device(&ipip_net_ops);
 }
 
index 5ad48ec..eec2341 100644 (file)
@@ -1150,6 +1150,7 @@ static int ipmr_mfc_add(struct net *net, struct mr_table *mrt,
        c->mfc_origin = mfc->mfcc_origin.s_addr;
        c->mfc_mcastgrp = mfc->mfcc_mcastgrp.s_addr;
        c->mfc_parent = mfc->mfcc_parent;
+       c->mfc_un.res.lastuse = jiffies;
        ipmr_update_thresholds(mrt, c, mfc->mfcc_ttls);
        if (!mrtsock)
                c->mfc_flags |= MFC_STATIC;
@@ -1748,7 +1749,7 @@ static void ipmr_queue_xmit(struct net *net, struct mr_table *mrt,
                vif->dev->stats.tx_bytes += skb->len;
        }
 
-       IPCB(skb)->flags |= IPSKB_FORWARDED;
+       IPCB(skb)->flags |= IPSKB_FORWARDED | IPSKB_FRAG_SEGS;
 
        /* RFC1584 teaches, that DVMRP/PIM router must deliver packets locally
         * not only before forwarding, but after forwarding on all output
@@ -1792,6 +1793,7 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
        vif = cache->mfc_parent;
        cache->mfc_un.res.pkt++;
        cache->mfc_un.res.bytes += skb->len;
+       cache->mfc_un.res.lastuse = jiffies;
 
        if (cache->mfc_origin == htonl(INADDR_ANY) && true_vifi >= 0) {
                struct mfc_cache *cache_proxy;
@@ -2071,10 +2073,10 @@ drop:
 static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
                              struct mfc_cache *c, struct rtmsg *rtm)
 {
-       int ct;
-       struct rtnexthop *nhp;
-       struct nlattr *mp_attr;
        struct rta_mfc_stats mfcs;
+       struct nlattr *mp_attr;
+       struct rtnexthop *nhp;
+       int ct;
 
        /* If cache is unresolved, don't try to parse IIF and OIF */
        if (c->mfc_parent >= MAXVIFS)
@@ -2106,7 +2108,10 @@ static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
        mfcs.mfcs_packets = c->mfc_un.res.pkt;
        mfcs.mfcs_bytes = c->mfc_un.res.bytes;
        mfcs.mfcs_wrong_if = c->mfc_un.res.wrong_if;
-       if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) < 0)
+       if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) ||
+           nla_put_u64_64bit(skb, RTA_EXPIRES,
+                             jiffies_to_clock_t(c->mfc_un.res.lastuse),
+                             RTA_PAD))
                return -EMSGSIZE;
 
        rtm->rtm_type = RTN_MULTICAST;
index 94d4aff..f9f9e37 100644 (file)
@@ -87,7 +87,7 @@ int sysctl_tcp_adv_win_scale __read_mostly = 1;
 EXPORT_SYMBOL(sysctl_tcp_adv_win_scale);
 
 /* rfc5961 challenge ack rate limiting */
-int sysctl_tcp_challenge_ack_limit = 100;
+int sysctl_tcp_challenge_ack_limit = 1000;
 
 int sysctl_tcp_stdurg __read_mostly;
 int sysctl_tcp_rfc1337 __read_mostly;
@@ -3424,6 +3424,23 @@ static int tcp_ack_update_window(struct sock *sk, const struct sk_buff *skb, u32
        return flag;
 }
 
+static bool __tcp_oow_rate_limited(struct net *net, int mib_idx,
+                                  u32 *last_oow_ack_time)
+{
+       if (*last_oow_ack_time) {
+               s32 elapsed = (s32)(tcp_time_stamp - *last_oow_ack_time);
+
+               if (0 <= elapsed && elapsed < sysctl_tcp_invalid_ratelimit) {
+                       NET_INC_STATS(net, mib_idx);
+                       return true;    /* rate-limited: don't send yet! */
+               }
+       }
+
+       *last_oow_ack_time = tcp_time_stamp;
+
+       return false;   /* not rate-limited: go ahead, send dupack now! */
+}
+
 /* Return true if we're currently rate-limiting out-of-window ACKs and
  * thus shouldn't send a dupack right now. We rate-limit dupacks in
  * response to out-of-window SYNs or ACKs to mitigate ACK loops or DoS
@@ -3437,21 +3454,9 @@ bool tcp_oow_rate_limited(struct net *net, const struct sk_buff *skb,
        /* Data packets without SYNs are not likely part of an ACK loop. */
        if ((TCP_SKB_CB(skb)->seq != TCP_SKB_CB(skb)->end_seq) &&
            !tcp_hdr(skb)->syn)
-               goto not_rate_limited;
-
-       if (*last_oow_ack_time) {
-               s32 elapsed = (s32)(tcp_time_stamp - *last_oow_ack_time);
-
-               if (0 <= elapsed && elapsed < sysctl_tcp_invalid_ratelimit) {
-                       NET_INC_STATS(net, mib_idx);
-                       return true;    /* rate-limited: don't send yet! */
-               }
-       }
-
-       *last_oow_ack_time = tcp_time_stamp;
+               return false;
 
-not_rate_limited:
-       return false;   /* not rate-limited: go ahead, send dupack now! */
+       return __tcp_oow_rate_limited(net, mib_idx, last_oow_ack_time);
 }
 
 /* RFC 5961 7 [ACK Throttling] */
@@ -3461,21 +3466,26 @@ static void tcp_send_challenge_ack(struct sock *sk, const struct sk_buff *skb)
        static u32 challenge_timestamp;
        static unsigned int challenge_count;
        struct tcp_sock *tp = tcp_sk(sk);
-       u32 now;
+       u32 count, now;
 
        /* First check our per-socket dupack rate limit. */
-       if (tcp_oow_rate_limited(sock_net(sk), skb,
-                                LINUX_MIB_TCPACKSKIPPEDCHALLENGE,
-                                &tp->last_oow_ack_time))
+       if (__tcp_oow_rate_limited(sock_net(sk),
+                                  LINUX_MIB_TCPACKSKIPPEDCHALLENGE,
+                                  &tp->last_oow_ack_time))
                return;
 
-       /* Then check the check host-wide RFC 5961 rate limit. */
+       /* Then check host-wide RFC 5961 rate limit. */
        now = jiffies / HZ;
        if (now != challenge_timestamp) {
+               u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
+
                challenge_timestamp = now;
-               challenge_count = 0;
+               WRITE_ONCE(challenge_count, half +
+                          prandom_u32_max(sysctl_tcp_challenge_ack_limit));
        }
-       if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
+       count = READ_ONCE(challenge_count);
+       if (count > 0) {
+               WRITE_ONCE(challenge_count, count - 1);
                NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
                tcp_send_ack(sk);
        }
index debdd8b..d84930b 100644 (file)
 
 int sysctl_tcp_thin_linear_timeouts __read_mostly;
 
+/**
+ *  tcp_write_err() - close socket and save error info
+ *  @sk:  The socket the error has appeared on.
+ *
+ *  Returns: Nothing (void)
+ */
+
 static void tcp_write_err(struct sock *sk)
 {
        sk->sk_err = sk->sk_err_soft ? : ETIMEDOUT;
@@ -33,16 +40,21 @@ static void tcp_write_err(struct sock *sk)
        __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
 }
 
-/* Do not allow orphaned sockets to eat all our resources.
- * This is direct violation of TCP specs, but it is required
- * to prevent DoS attacks. It is called when a retransmission timeout
- * or zero probe timeout occurs on orphaned socket.
+/**
+ *  tcp_out_of_resources() - Close socket if out of resources
+ *  @sk:        pointer to current socket
+ *  @do_reset:  send a last packet with reset flag
  *
- * Criteria is still not confirmed experimentally and may change.
- * We kill the socket, if:
- * 1. If number of orphaned sockets exceeds an administratively configured
- *    limit.
- * 2. If we have strong memory pressure.
+ *  Do not allow orphaned sockets to eat all our resources.
+ *  This is direct violation of TCP specs, but it is required
+ *  to prevent DoS attacks. It is called when a retransmission timeout
+ *  or zero probe timeout occurs on orphaned socket.
+ *
+ *  Criteria are still not confirmed experimentally and may change.
+ *  We kill the socket, if:
+ *  1. If number of orphaned sockets exceeds an administratively configured
+ *     limit.
+ *  2. If we have strong memory pressure.
  */
 static int tcp_out_of_resources(struct sock *sk, bool do_reset)
 {
@@ -74,7 +86,11 @@ static int tcp_out_of_resources(struct sock *sk, bool do_reset)
        return 0;
 }
 
-/* Calculate maximal number or retries on an orphaned socket. */
+/**
+ *  tcp_orphan_retries() - Returns maximal number of retries on an orphaned socket
+ *  @sk:    Pointer to the current socket.
+ *  @alive: bool, socket alive state
+ */
 static int tcp_orphan_retries(struct sock *sk, bool alive)
 {
        int retries = sock_net(sk)->ipv4.sysctl_tcp_orphan_retries; /* May be zero. */
@@ -115,10 +131,22 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
        }
 }
 
-/* This function calculates a "timeout" which is equivalent to the timeout of a
- * TCP connection after "boundary" unsuccessful, exponentially backed-off
+
+/**
+ *  retransmits_timed_out() - returns true if this connection has timed out
+ *  @sk:       The current socket
+ *  @boundary: max number of retransmissions
+ *  @timeout:  A custom timeout value.
+ *             If set to 0, the default timeout is calculated
+ *             using TCP_RTO_MIN and the number of unsuccessful retransmits.
+ *  @syn_set:  true if the SYN Bit was set.
+ *
+ * The default "timeout" value this function can calculate and use
+ * is equivalent to the timeout of a TCP connection
+ * after "boundary" unsuccessful, exponentially backed-off
  * retransmissions with an initial RTO of TCP_RTO_MIN or TCP_TIMEOUT_INIT if
  * syn_set flag is set.
+ *
  */
 static bool retransmits_timed_out(struct sock *sk,
                                  unsigned int boundary,
@@ -257,6 +285,16 @@ out:
                sk_mem_reclaim(sk);
 }
 
+
+/**
+ *  tcp_delack_timer() - The TCP delayed ACK timeout handler
+ *  @data:  Pointer to the current socket. (gets cast to struct sock *)
+ *
+ *  This function gets (indirectly) called when the kernel timer for a TCP packet
+ *  of this socket expires. Calls tcp_delack_timer_handler() to do the actual work.
+ *
+ *  Returns: Nothing (void)
+ */
 static void tcp_delack_timer(unsigned long data)
 {
        struct sock *sk = (struct sock *)data;
@@ -350,10 +388,18 @@ static void tcp_fastopen_synack_timer(struct sock *sk)
                          TCP_TIMEOUT_INIT << req->num_timeout, TCP_RTO_MAX);
 }
 
-/*
- *     The TCP retransmit timer.
- */
 
+/**
+ *  tcp_retransmit_timer() - The TCP retransmit timeout handler
+ *  @sk:  Pointer to the current socket.
+ *
+ *  This function gets called when the kernel timer for a TCP packet
+ *  of this socket expires.
+ *
+ *  It handles retransmission, timer adjustment and other necessary measures.
+ *
+ *  Returns: Nothing (void)
+ */
 void tcp_retransmit_timer(struct sock *sk)
 {
        struct tcp_sock *tp = tcp_sk(sk);
@@ -494,7 +540,8 @@ out_reset_timer:
 out:;
 }
 
-/* Called with BH disabled */
+/* Called with bottom-half processing disabled.
+ * Called by tcp_write_timer(). */
 void tcp_write_timer_handler(struct sock *sk)
 {
        struct inet_connection_sock *icsk = inet_csk(sk);
@@ -539,7 +586,7 @@ static void tcp_write_timer(unsigned long data)
        if (!sock_owned_by_user(sk)) {
                tcp_write_timer_handler(sk);
        } else {
-               /* deleguate our work to tcp_release_cb() */
+               /* delegate our work to tcp_release_cb() */
                if (!test_and_set_bit(TCP_WRITE_TIMER_DEFERRED, &tcp_sk(sk)->tsq_flags))
                        sock_hold(sk);
        }
index 0d01718..ec35eaa 100644 (file)
@@ -6,6 +6,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <linux/mpls.h>
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>
 #include <linux/slab.h>
 
 static struct xfrm_tunnel __rcu *tunnel4_handlers __read_mostly;
 static struct xfrm_tunnel __rcu *tunnel64_handlers __read_mostly;
+static struct xfrm_tunnel __rcu *tunnelmpls4_handlers __read_mostly;
 static DEFINE_MUTEX(tunnel4_mutex);
 
 static inline struct xfrm_tunnel __rcu **fam_handlers(unsigned short family)
 {
-       return (family == AF_INET) ? &tunnel4_handlers : &tunnel64_handlers;
+       return (family == AF_INET) ? &tunnel4_handlers :
+               (family == AF_INET6) ? &tunnel64_handlers :
+               &tunnelmpls4_handlers;
 }
 
 int xfrm4_tunnel_register(struct xfrm_tunnel *handler, unsigned short family)
@@ -125,6 +129,26 @@ drop:
 }
 #endif
 
+#if IS_ENABLED(CONFIG_MPLS)
+static int tunnelmpls4_rcv(struct sk_buff *skb)
+{
+       struct xfrm_tunnel *handler;
+
+       if (!pskb_may_pull(skb, sizeof(struct mpls_label)))
+               goto drop;
+
+       for_each_tunnel_rcu(tunnelmpls4_handlers, handler)
+               if (!handler->handler(skb))
+                       return 0;
+
+       icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
+
+drop:
+       kfree_skb(skb);
+       return 0;
+}
+#endif
+
 static void tunnel4_err(struct sk_buff *skb, u32 info)
 {
        struct xfrm_tunnel *handler;
@@ -145,6 +169,17 @@ static void tunnel64_err(struct sk_buff *skb, u32 info)
 }
 #endif
 
+#if IS_ENABLED(CONFIG_MPLS)
+static void tunnelmpls4_err(struct sk_buff *skb, u32 info)
+{
+       struct xfrm_tunnel *handler;
+
+       for_each_tunnel_rcu(tunnelmpls4_handlers, handler)
+               if (!handler->err_handler(skb, info))
+                       break;
+}
+#endif
+
 static const struct net_protocol tunnel4_protocol = {
        .handler        =       tunnel4_rcv,
        .err_handler    =       tunnel4_err,
@@ -161,24 +196,47 @@ static const struct net_protocol tunnel64_protocol = {
 };
 #endif
 
+#if IS_ENABLED(CONFIG_MPLS)
+static const struct net_protocol tunnelmpls4_protocol = {
+       .handler        =       tunnelmpls4_rcv,
+       .err_handler    =       tunnelmpls4_err,
+       .no_policy      =       1,
+       .netns_ok       =       1,
+};
+#endif
+
 static int __init tunnel4_init(void)
 {
-       if (inet_add_protocol(&tunnel4_protocol, IPPROTO_IPIP)) {
-               pr_err("%s: can't add protocol\n", __func__);
-               return -EAGAIN;
-       }
+       if (inet_add_protocol(&tunnel4_protocol, IPPROTO_IPIP))
+               goto err;
 #if IS_ENABLED(CONFIG_IPV6)
        if (inet_add_protocol(&tunnel64_protocol, IPPROTO_IPV6)) {
-               pr_err("tunnel64 init: can't add protocol\n");
                inet_del_protocol(&tunnel4_protocol, IPPROTO_IPIP);
-               return -EAGAIN;
+               goto err;
+       }
+#endif
+#if IS_ENABLED(CONFIG_MPLS)
+       if (inet_add_protocol(&tunnelmpls4_protocol, IPPROTO_MPLS)) {
+               inet_del_protocol(&tunnel4_protocol, IPPROTO_IPIP);
+#if IS_ENABLED(CONFIG_IPV6)
+               inet_del_protocol(&tunnel64_protocol, IPPROTO_IPV6);
+#endif
+               goto err;
        }
 #endif
        return 0;
+
+err:
+       pr_err("%s: can't add protocol\n", __func__);
+       return -EAGAIN;
 }
 
 static void __exit tunnel4_fini(void)
 {
+#if IS_ENABLED(CONFIG_MPLS)
+       if (inet_del_protocol(&tunnelmpls4_protocol, IPPROTO_MPLS))
+               pr_err("tunnelmpls4 close: can't remove protocol\n");
+#endif
 #if IS_ENABLED(CONFIG_IPV6)
        if (inet_del_protocol(&tunnel64_protocol, IPPROTO_IPV6))
                pr_err("tunnel64 close: can't remove protocol\n");
index ca5e8ea..4aed8fc 100644 (file)
@@ -1583,6 +1583,8 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
        if (sk_filter(sk, skb))
                goto drop;
+       if (unlikely(skb->len < sizeof(struct udphdr)))
+               goto drop;
 
        udp_csum_pull_header(skb);
        if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) {
index a1f6b7b..24f1b08 100644 (file)
@@ -547,7 +547,7 @@ void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex,
        struct sk_buff *skb;
        int err = -ENOBUFS;
 
-       skb = nlmsg_new(inet6_netconf_msgsize_devconf(type), GFP_ATOMIC);
+       skb = nlmsg_new(inet6_netconf_msgsize_devconf(type), GFP_KERNEL);
        if (!skb)
                goto errout;
 
@@ -559,7 +559,7 @@ void inet6_netconf_notify_devconf(struct net *net, int type, int ifindex,
                kfree_skb(skb);
                goto errout;
        }
-       rtnl_notify(skb, net, 0, RTNLGRP_IPV6_NETCONF, NULL, GFP_ATOMIC);
+       rtnl_notify(skb, net, 0, RTNLGRP_IPV6_NETCONF, NULL, GFP_KERNEL);
        return;
 errout:
        rtnl_set_sk_err(net, RTNLGRP_IPV6_NETCONF, err);
index 487ef3b..7adce13 100644 (file)
@@ -1500,6 +1500,7 @@ static int ip6mr_mfc_add(struct net *net, struct mr6_table *mrt,
        c->mf6c_origin = mfc->mf6cc_origin.sin6_addr;
        c->mf6c_mcastgrp = mfc->mf6cc_mcastgrp.sin6_addr;
        c->mf6c_parent = mfc->mf6cc_parent;
+       c->mfc_un.res.lastuse = jiffies;
        ip6mr_update_thresholds(mrt, c, ttls);
        if (!mrtsock)
                c->mfc_flags |= MFC_STATIC;
@@ -1592,14 +1593,15 @@ static int ip6mr_sk_init(struct mr6_table *mrt, struct sock *sk)
        if (likely(mrt->mroute6_sk == NULL)) {
                mrt->mroute6_sk = sk;
                net->ipv6.devconf_all->mc_forwarding++;
-               inet6_netconf_notify_devconf(net, NETCONFA_MC_FORWARDING,
-                                            NETCONFA_IFINDEX_ALL,
-                                            net->ipv6.devconf_all);
-       }
-       else
+       } else {
                err = -EADDRINUSE;
+       }
        write_unlock_bh(&mrt_lock);
 
+       if (!err)
+               inet6_netconf_notify_devconf(net, NETCONFA_MC_FORWARDING,
+                                            NETCONFA_IFINDEX_ALL,
+                                            net->ipv6.devconf_all);
        rtnl_unlock();
 
        return err;
@@ -1617,11 +1619,11 @@ int ip6mr_sk_done(struct sock *sk)
                        write_lock_bh(&mrt_lock);
                        mrt->mroute6_sk = NULL;
                        net->ipv6.devconf_all->mc_forwarding--;
+                       write_unlock_bh(&mrt_lock);
                        inet6_netconf_notify_devconf(net,
                                                     NETCONFA_MC_FORWARDING,
                                                     NETCONFA_IFINDEX_ALL,
                                                     net->ipv6.devconf_all);
-                       write_unlock_bh(&mrt_lock);
 
                        mroute_clean_tables(mrt, false);
                        err = 0;
@@ -2091,6 +2093,7 @@ static void ip6_mr_forward(struct net *net, struct mr6_table *mrt,
        vif = cache->mf6c_parent;
        cache->mfc_un.res.pkt++;
        cache->mfc_un.res.bytes += skb->len;
+       cache->mfc_un.res.lastuse = jiffies;
 
        if (ipv6_addr_any(&cache->mf6c_origin) && true_vifi >= 0) {
                struct mfc6_cache *cache_proxy;
@@ -2233,10 +2236,10 @@ int ip6_mr_input(struct sk_buff *skb)
 static int __ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb,
                               struct mfc6_cache *c, struct rtmsg *rtm)
 {
-       int ct;
-       struct rtnexthop *nhp;
-       struct nlattr *mp_attr;
        struct rta_mfc_stats mfcs;
+       struct nlattr *mp_attr;
+       struct rtnexthop *nhp;
+       int ct;
 
        /* If cache is unresolved, don't try to parse IIF and OIF */
        if (c->mf6c_parent >= MAXMIFS)
@@ -2269,7 +2272,10 @@ static int __ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb,
        mfcs.mfcs_packets = c->mfc_un.res.pkt;
        mfcs.mfcs_bytes = c->mfc_un.res.bytes;
        mfcs.mfcs_wrong_if = c->mfc_un.res.wrong_if;
-       if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) < 0)
+       if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) ||
+           nla_put_u64_64bit(skb, RTA_EXPIRES,
+                             jiffies_to_clock_t(c->mfc_un.res.lastuse),
+                             RTA_PAD))
                return -EMSGSIZE;
 
        rtm->rtm_type = RTN_MULTICAST;
index 917a5cd..182b6a9 100644 (file)
@@ -688,12 +688,19 @@ out:
        return 0;
 }
 
-static const struct tnl_ptk_info tpi = {
+static const struct tnl_ptk_info ipip_tpi = {
        /* no tunnel info required for ipip. */
        .proto = htons(ETH_P_IP),
 };
 
-static int ipip_rcv(struct sk_buff *skb)
+#if IS_ENABLED(CONFIG_MPLS)
+static const struct tnl_ptk_info mplsip_tpi = {
+       /* no tunnel info required for mplsip. */
+       .proto = htons(ETH_P_MPLS_UC),
+};
+#endif
+
+static int sit_tunnel_rcv(struct sk_buff *skb, u8 ipproto)
 {
        const struct iphdr *iph;
        struct ip_tunnel *tunnel;
@@ -702,15 +709,23 @@ static int ipip_rcv(struct sk_buff *skb)
        tunnel = ipip6_tunnel_lookup(dev_net(skb->dev), skb->dev,
                                     iph->saddr, iph->daddr);
        if (tunnel) {
-               if (tunnel->parms.iph.protocol != IPPROTO_IPIP &&
+               const struct tnl_ptk_info *tpi;
+
+               if (tunnel->parms.iph.protocol != ipproto &&
                    tunnel->parms.iph.protocol != 0)
                        goto drop;
 
                if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
                        goto drop;
-               if (iptunnel_pull_header(skb, 0, tpi.proto, false))
+#if IS_ENABLED(CONFIG_MPLS)
+               if (ipproto == IPPROTO_MPLS)
+                       tpi = &mplsip_tpi;
+               else
+#endif
+                       tpi = &ipip_tpi;
+               if (iptunnel_pull_header(skb, 0, tpi->proto, false))
                        goto drop;
-               return ip_tunnel_rcv(tunnel, skb, &tpi, NULL, log_ecn_error);
+               return ip_tunnel_rcv(tunnel, skb, tpi, NULL, log_ecn_error);
        }
 
        return 1;
@@ -720,6 +735,18 @@ drop:
        return 0;
 }
 
+static int ipip_rcv(struct sk_buff *skb)
+{
+       return sit_tunnel_rcv(skb, IPPROTO_IPIP);
+}
+
+#if IS_ENABLED(CONFIG_MPLS)
+static int mplsip_rcv(struct sk_buff *skb)
+{
+       return sit_tunnel_rcv(skb, IPPROTO_MPLS);
+}
+#endif
+
 /*
  * If the IPv6 address comes from 6rd / 6to4 (RFC 3056) addr space this function
  * stores the embedded IPv4 address in v4dst and returns true.
@@ -958,7 +985,8 @@ tx_error:
        return NETDEV_TX_OK;
 }
 
-static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t sit_tunnel_xmit__(struct sk_buff *skb,
+                                    struct net_device *dev, u8 ipproto)
 {
        struct ip_tunnel *tunnel = netdev_priv(dev);
        const struct iphdr  *tiph = &tunnel->parms.iph;
@@ -966,9 +994,9 @@ static netdev_tx_t ipip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
        if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP4))
                goto tx_error;
 
-       skb_set_inner_ipproto(skb, IPPROTO_IPIP);
+       skb_set_inner_ipproto(skb, ipproto);
 
-       ip_tunnel_xmit(skb, dev, tiph, IPPROTO_IPIP);
+       ip_tunnel_xmit(skb, dev, tiph, ipproto);
        return NETDEV_TX_OK;
 tx_error:
        kfree_skb(skb);
@@ -981,11 +1009,16 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
 {
        switch (skb->protocol) {
        case htons(ETH_P_IP):
-               ipip_tunnel_xmit(skb, dev);
+               sit_tunnel_xmit__(skb, dev, IPPROTO_IPIP);
                break;
        case htons(ETH_P_IPV6):
                ipip6_tunnel_xmit(skb, dev);
                break;
+#if IS_ENABLED(CONFIG_MPLS)
+       case htons(ETH_P_MPLS_UC):
+               sit_tunnel_xmit__(skb, dev, IPPROTO_MPLS);
+               break;
+#endif
        default:
                goto tx_err;
        }
@@ -1093,6 +1126,16 @@ static int ipip6_tunnel_update_6rd(struct ip_tunnel *t,
 }
 #endif
 
+bool ipip6_valid_ip_proto(u8 ipproto)
+{
+       return ipproto == IPPROTO_IPV6 ||
+               ipproto == IPPROTO_IPIP ||
+#if IS_ENABLED(CONFIG_MPLS)
+               ipproto == IPPROTO_MPLS ||
+#endif
+               ipproto == 0;
+}
+
 static int
 ipip6_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 {
@@ -1152,9 +1195,7 @@ ipip6_tunnel_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
                        goto done;
 
                err = -EINVAL;
-               if (p.iph.protocol != IPPROTO_IPV6 &&
-                   p.iph.protocol != IPPROTO_IPIP &&
-                   p.iph.protocol != 0)
+               if (!ipip6_valid_ip_proto(p.iph.protocol))
                        goto done;
                if (p.iph.version != 4 ||
                    p.iph.ihl != 5 || (p.iph.frag_off&htons(~IP_DF)))
@@ -1379,9 +1420,7 @@ static int ipip6_validate(struct nlattr *tb[], struct nlattr *data[])
                return 0;
 
        proto = nla_get_u8(data[IFLA_IPTUN_PROTO]);
-       if (proto != IPPROTO_IPV6 &&
-           proto != IPPROTO_IPIP &&
-           proto != 0)
+       if (!ipip6_valid_ip_proto(proto))
                return -EINVAL;
 
        return 0;
@@ -1723,6 +1762,14 @@ static struct xfrm_tunnel ipip_handler __read_mostly = {
        .priority       =       2,
 };
 
+#if IS_ENABLED(CONFIG_MPLS)
+static struct xfrm_tunnel mplsip_handler __read_mostly = {
+       .handler        =       mplsip_rcv,
+       .err_handler    =       ipip6_err,
+       .priority       =       2,
+};
+#endif
+
 static void __net_exit sit_destroy_tunnels(struct net *net,
                                           struct list_head *head)
 {
@@ -1818,6 +1865,9 @@ static void __exit sit_cleanup(void)
        rtnl_link_unregister(&sit_link_ops);
        xfrm4_tunnel_deregister(&sit_handler, AF_INET6);
        xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
+#if IS_ENABLED(CONFIG_MPLS)
+       xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS);
+#endif
 
        unregister_pernet_device(&sit_net_ops);
        rcu_barrier(); /* Wait for completion of call_rcu()'s */
@@ -1827,7 +1877,7 @@ static int __init sit_init(void)
 {
        int err;
 
-       pr_info("IPv6 over IPv4 tunneling driver\n");
+       pr_info("IPv6, IPv4 and MPLS over IPv4 tunneling driver\n");
 
        err = register_pernet_device(&sit_net_ops);
        if (err < 0)
@@ -1842,6 +1892,13 @@ static int __init sit_init(void)
                pr_info("%s: can't register ip4ip4\n", __func__);
                goto xfrm_tunnel4_failed;
        }
+#if IS_ENABLED(CONFIG_MPLS)
+       err = xfrm4_tunnel_register(&mplsip_handler, AF_MPLS);
+       if (err < 0) {
+               pr_info("%s: can't register mplsip\n", __func__);
+               goto xfrm_tunnel_mpls_failed;
+       }
+#endif
        err = rtnl_link_register(&sit_link_ops);
        if (err < 0)
                goto rtnl_link_failed;
@@ -1850,6 +1907,10 @@ out:
        return err;
 
 rtnl_link_failed:
+#if IS_ENABLED(CONFIG_MPLS)
+       xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS);
+xfrm_tunnel_mpls_failed:
+#endif
        xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
 xfrm_tunnel4_failed:
        xfrm4_tunnel_deregister(&sit_handler, AF_INET6);
index 0a71a31..ad5292b 100644 (file)
@@ -620,6 +620,8 @@ int udpv6_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 
        if (sk_filter(sk, skb))
                goto drop;
+       if (unlikely(skb->len < sizeof(struct udphdr)))
+               goto drop;
 
        udp_csum_pull_header(skb);
        if (sk_rcvqueues_full(sk, sk->sk_rcvbuf)) {
index e9beaa5..5c161e7 100644 (file)
@@ -1009,10 +1009,12 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
        unsigned int flags;
 
        if (event == NETDEV_REGISTER) {
-               /* For now just support Ethernet and IPGRE devices */
+               /* For now just support Ethernet, IPGRE, SIT and IPIP devices */
                if (dev->type == ARPHRD_ETHER ||
                    dev->type == ARPHRD_LOOPBACK ||
-                   dev->type == ARPHRD_IPGRE) {
+                   dev->type == ARPHRD_IPGRE ||
+                   dev->type == ARPHRD_SIT ||
+                   dev->type == ARPHRD_TUNNEL) {
                        mdev = mpls_add_dev(dev);
                        if (IS_ERR(mdev))
                                return notifier_from_errno(PTR_ERR(mdev));
diff --git a/net/ncsi/Kconfig b/net/ncsi/Kconfig
new file mode 100644 (file)
index 0000000..08a8a60
--- /dev/null
@@ -0,0 +1,12 @@
+#
+# Configuration for NCSI support
+#
+
+config NET_NCSI
+       bool "NCSI interface support"
+       depends on INET
+       ---help---
+         This module provides NCSI (Network Controller Sideband Interface)
+         support. Enable this only if your system connects to a network
+         device via NCSI and the ethernet driver you're using supports
+         the protocol explicitly.
diff --git a/net/ncsi/Makefile b/net/ncsi/Makefile
new file mode 100644 (file)
index 0000000..dd12b56
--- /dev/null
@@ -0,0 +1,4 @@
+#
+# Makefile for NCSI API
+#
+obj-$(CONFIG_NET_NCSI) += ncsi-cmd.o ncsi-rsp.o ncsi-aen.o ncsi-manage.o
diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
new file mode 100644 (file)
index 0000000..33738c0
--- /dev/null
@@ -0,0 +1,328 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __NCSI_INTERNAL_H__
+#define __NCSI_INTERNAL_H__
+
+enum {
+       NCSI_CAP_BASE           = 0,
+       NCSI_CAP_GENERIC        = 0,
+       NCSI_CAP_BC,
+       NCSI_CAP_MC,
+       NCSI_CAP_BUFFER,
+       NCSI_CAP_AEN,
+       NCSI_CAP_VLAN,
+       NCSI_CAP_MAX
+};
+
+enum {
+       NCSI_CAP_GENERIC_HWA             = 0x01, /* HW arbitration           */
+       NCSI_CAP_GENERIC_HDS             = 0x02, /* HNC driver status change */
+       NCSI_CAP_GENERIC_FC              = 0x04, /* HNC to MC flow control   */
+       NCSI_CAP_GENERIC_FC1             = 0x08, /* MC to HNC flow control   */
+       NCSI_CAP_GENERIC_MC              = 0x10, /* Global MC filtering      */
+       NCSI_CAP_GENERIC_HWA_UNKNOWN     = 0x00, /* Unknown HW arbitration   */
+       NCSI_CAP_GENERIC_HWA_SUPPORT     = 0x20, /* Supported HW arbitration */
+       NCSI_CAP_GENERIC_HWA_NOT_SUPPORT = 0x40, /* No HW arbitration        */
+       NCSI_CAP_GENERIC_HWA_RESERVED    = 0x60, /* Reserved HW arbitration  */
+       NCSI_CAP_GENERIC_HWA_MASK        = 0x60, /* Mask for HW arbitration  */
+       NCSI_CAP_GENERIC_MASK            = 0x7f,
+       NCSI_CAP_BC_ARP                  = 0x01, /* ARP packet filtering     */
+       NCSI_CAP_BC_DHCPC                = 0x02, /* DHCP client filtering    */
+       NCSI_CAP_BC_DHCPS                = 0x04, /* DHCP server filtering    */
+       NCSI_CAP_BC_NETBIOS              = 0x08, /* NetBIOS packet filtering */
+       NCSI_CAP_BC_MASK                 = 0x0f,
+       NCSI_CAP_MC_IPV6_NEIGHBOR        = 0x01, /* IPv6 neighbor filtering  */
+       NCSI_CAP_MC_IPV6_ROUTER          = 0x02, /* IPv6 router filtering    */
+       NCSI_CAP_MC_DHCPV6_RELAY         = 0x04, /* DHCPv6 relay / server MC */
+       NCSI_CAP_MC_DHCPV6_WELL_KNOWN    = 0x08, /* DHCPv6 well-known MC     */
+       NCSI_CAP_MC_IPV6_MLD             = 0x10, /* IPv6 MLD filtering       */
+       NCSI_CAP_MC_IPV6_NEIGHBOR_S      = 0x20, /* IPv6 neighbour filtering */
+       NCSI_CAP_MC_MASK                 = 0x3f,
+       NCSI_CAP_AEN_LSC                 = 0x01, /* Link status change       */
+       NCSI_CAP_AEN_CR                  = 0x02, /* Configuration required   */
+       NCSI_CAP_AEN_HDS                 = 0x04, /* HNC driver status        */
+       NCSI_CAP_AEN_MASK                = 0x07,
+       NCSI_CAP_VLAN_ONLY               = 0x01, /* Filter VLAN packet only  */
+       NCSI_CAP_VLAN_NO                 = 0x02, /* Filter VLAN and non-VLAN */
+       NCSI_CAP_VLAN_ANY                = 0x04, /* Filter Any-and-non-VLAN  */
+       NCSI_CAP_VLAN_MASK               = 0x07
+};
+
+enum {
+       NCSI_MODE_BASE          = 0,
+       NCSI_MODE_ENABLE        = 0,
+       NCSI_MODE_TX_ENABLE,
+       NCSI_MODE_LINK,
+       NCSI_MODE_VLAN,
+       NCSI_MODE_BC,
+       NCSI_MODE_MC,
+       NCSI_MODE_AEN,
+       NCSI_MODE_FC,
+       NCSI_MODE_MAX
+};
+
+enum {
+       NCSI_FILTER_BASE        = 0,
+       NCSI_FILTER_VLAN        = 0,
+       NCSI_FILTER_UC,
+       NCSI_FILTER_MC,
+       NCSI_FILTER_MIXED,
+       NCSI_FILTER_MAX
+};
+
+struct ncsi_channel_version {
+       u32 version;            /* Supported BCD encoded NCSI version */
+       u32 alpha2;             /* Supported BCD encoded NCSI alpha2  */
+       u8  fw_name[12];        /* Firmware name string               */
+       u32 fw_version;         /* Firmware version                   */
+       u16 pci_ids[4];         /* PCI identification                 */
+       u32 mf_id;              /* Manufacturer ID                    */
+};
+
+struct ncsi_channel_cap {
+       u32 index;      /* Index of channel capabilities */
+       u32 cap;        /* NCSI channel capability       */
+};
+
+struct ncsi_channel_mode {
+       u32 index;      /* Index of channel modes      */
+       u32 enable;     /* Enabled or disabled         */
+       u32 size;       /* Valid entries in ncm_data[] */
+       u32 data[8];    /* Data entries                */
+};
+
+struct ncsi_channel_filter {
+       u32 index;      /* Index of channel filters          */
+       u32 total;      /* Total entries in the filter table */
+       u64 bitmap;     /* Bitmap of valid entries           */
+       u32 data[];     /* Data for the valid entries        */
+};
+
+struct ncsi_channel_stats {
+       u32 hnc_cnt_hi;         /* Counter cleared            */
+       u32 hnc_cnt_lo;         /* Counter cleared            */
+       u32 hnc_rx_bytes;       /* Rx bytes                   */
+       u32 hnc_tx_bytes;       /* Tx bytes                   */
+       u32 hnc_rx_uc_pkts;     /* Rx UC packets              */
+       u32 hnc_rx_mc_pkts;     /* Rx MC packets              */
+       u32 hnc_rx_bc_pkts;     /* Rx BC packets              */
+       u32 hnc_tx_uc_pkts;     /* Tx UC packets              */
+       u32 hnc_tx_mc_pkts;     /* Tx MC packets              */
+       u32 hnc_tx_bc_pkts;     /* Tx BC packets              */
+       u32 hnc_fcs_err;        /* FCS errors                 */
+       u32 hnc_align_err;      /* Alignment errors           */
+       u32 hnc_false_carrier;  /* False carrier detection    */
+       u32 hnc_runt_pkts;      /* Rx runt packets            */
+       u32 hnc_jabber_pkts;    /* Rx jabber packets          */
+       u32 hnc_rx_pause_xon;   /* Rx pause XON frames        */
+       u32 hnc_rx_pause_xoff;  /* Rx XOFF frames             */
+       u32 hnc_tx_pause_xon;   /* Tx XON frames              */
+       u32 hnc_tx_pause_xoff;  /* Tx XOFF frames             */
+       u32 hnc_tx_s_collision; /* Single collision frames    */
+       u32 hnc_tx_m_collision; /* Multiple collision frames  */
+       u32 hnc_l_collision;    /* Late collision frames      */
+       u32 hnc_e_collision;    /* Excessive collision frames */
+       u32 hnc_rx_ctl_frames;  /* Rx control frames          */
+       u32 hnc_rx_64_frames;   /* Rx 64-bytes frames         */
+       u32 hnc_rx_127_frames;  /* Rx 65-127 bytes frames     */
+       u32 hnc_rx_255_frames;  /* Rx 128-255 bytes frames    */
+       u32 hnc_rx_511_frames;  /* Rx 256-511 bytes frames    */
+       u32 hnc_rx_1023_frames; /* Rx 512-1023 bytes frames   */
+       u32 hnc_rx_1522_frames; /* Rx 1024-1522 bytes frames  */
+       u32 hnc_rx_9022_frames; /* Rx 1523-9022 bytes frames  */
+       u32 hnc_tx_64_frames;   /* Tx 64-bytes frames         */
+       u32 hnc_tx_127_frames;  /* Tx 65-127 bytes frames     */
+       u32 hnc_tx_255_frames;  /* Tx 128-255 bytes frames    */
+       u32 hnc_tx_511_frames;  /* Tx 256-511 bytes frames    */
+       u32 hnc_tx_1023_frames; /* Tx 512-1023 bytes frames   */
+       u32 hnc_tx_1522_frames; /* Tx 1024-1522 bytes frames  */
+       u32 hnc_tx_9022_frames; /* Tx 1523-9022 bytes frames  */
+       u32 hnc_rx_valid_bytes; /* Rx valid bytes             */
+       u32 hnc_rx_runt_pkts;   /* Rx error runt packets      */
+       u32 hnc_rx_jabber_pkts; /* Rx error jabber packets    */
+       u32 ncsi_rx_cmds;       /* Rx NCSI commands           */
+       u32 ncsi_dropped_cmds;  /* Dropped commands           */
+       u32 ncsi_cmd_type_errs; /* Command type errors        */
+       u32 ncsi_cmd_csum_errs; /* Command checksum errors    */
+       u32 ncsi_rx_pkts;       /* Rx NCSI packets            */
+       u32 ncsi_tx_pkts;       /* Tx NCSI packets            */
+       u32 ncsi_tx_aen_pkts;   /* Tx AEN packets             */
+       u32 pt_tx_pkts;         /* Tx packets                 */
+       u32 pt_tx_dropped;      /* Tx dropped packets         */
+       u32 pt_tx_channel_err;  /* Tx channel errors          */
+       u32 pt_tx_us_err;       /* Tx undersize errors        */
+       u32 pt_rx_pkts;         /* Rx packets                 */
+       u32 pt_rx_dropped;      /* Rx dropped packets         */
+       u32 pt_rx_channel_err;  /* Rx channel errors          */
+       u32 pt_rx_us_err;       /* Rx undersize errors        */
+       u32 pt_rx_os_err;       /* Rx oversize errors         */
+};
+
+struct ncsi_dev_priv;
+struct ncsi_package;
+
+#define NCSI_PACKAGE_SHIFT     5
+#define NCSI_PACKAGE_INDEX(c)  (((c) >> NCSI_PACKAGE_SHIFT) & 0x7)
+#define NCSI_CHANNEL_INDEX(c)  ((c) & ((1 << NCSI_PACKAGE_SHIFT) - 1))
+#define NCSI_TO_CHANNEL(p, c)  (((p) << NCSI_PACKAGE_SHIFT) | (c))
+
+struct ncsi_channel {
+       unsigned char               id;
+       int                         state;
+#define NCSI_CHANNEL_INACTIVE          1
+#define NCSI_CHANNEL_ACTIVE            2
+#define NCSI_CHANNEL_INVISIBLE         3
+       spinlock_t                  lock;       /* Protect filters etc */
+       struct ncsi_package         *package;
+       struct ncsi_channel_version version;
+       struct ncsi_channel_cap     caps[NCSI_CAP_MAX];
+       struct ncsi_channel_mode    modes[NCSI_MODE_MAX];
+       struct ncsi_channel_filter  *filters[NCSI_FILTER_MAX];
+       struct ncsi_channel_stats   stats;
+       struct timer_list           timer;      /* Link monitor timer  */
+       bool                        enabled;    /* Timer is enabled    */
+       unsigned int                timeout;    /* Times of timeout    */
+       struct list_head            node;
+       struct list_head            link;
+};
+
+struct ncsi_package {
+       unsigned char        id;          /* NCSI 3-bit package ID  */
+       unsigned char        uuid[16];    /* UUID                   */
+       struct ncsi_dev_priv *ndp;        /* NCSI device            */
+       spinlock_t           lock;        /* Protect the package    */
+       unsigned int         channel_num; /* Number of channels     */
+       struct list_head     channels;    /* List of channels       */
+       struct list_head     node;        /* Form list of packages  */
+};
+
+struct ncsi_request {
+       unsigned char        id;      /* Request ID - 0 to 255           */
+       bool                 used;    /* Request that has been assigned  */
+       bool                 driven;  /* Drive state machine             */
+       struct ncsi_dev_priv *ndp;    /* Associated NCSI device          */
+       struct sk_buff       *cmd;    /* Associated NCSI command packet  */
+       struct sk_buff       *rsp;    /* Associated NCSI response packet */
+       struct timer_list    timer;   /* Timer on waiting for response   */
+       bool                 enabled; /* Timer has been enabled or not   */
+};
+
+enum {
+       ncsi_dev_state_major            = 0xff00,
+       ncsi_dev_state_minor            = 0x00ff,
+       ncsi_dev_state_probe_deselect   = 0x0201,
+       ncsi_dev_state_probe_package,
+       ncsi_dev_state_probe_channel,
+       ncsi_dev_state_probe_cis,
+       ncsi_dev_state_probe_gvi,
+       ncsi_dev_state_probe_gc,
+       ncsi_dev_state_probe_gls,
+       ncsi_dev_state_probe_dp,
+       ncsi_dev_state_config_sp        = 0x0301,
+       ncsi_dev_state_config_cis,
+       ncsi_dev_state_config_sma,
+       ncsi_dev_state_config_ebf,
+#if IS_ENABLED(CONFIG_IPV6)
+       ncsi_dev_state_config_egmf,
+#endif
+       ncsi_dev_state_config_ecnt,
+       ncsi_dev_state_config_ec,
+       ncsi_dev_state_config_ae,
+       ncsi_dev_state_config_gls,
+       ncsi_dev_state_config_done,
+       ncsi_dev_state_suspend_select   = 0x0401,
+       ncsi_dev_state_suspend_dcnt,
+       ncsi_dev_state_suspend_dc,
+       ncsi_dev_state_suspend_deselect,
+       ncsi_dev_state_suspend_done
+};
+
+struct ncsi_dev_priv {
+       struct ncsi_dev     ndev;            /* Associated NCSI device     */
+       unsigned int        flags;           /* NCSI device flags          */
+#define NCSI_DEV_PROBED                1            /* Finalized NCSI topology    */
+#define NCSI_DEV_HWA           2            /* Enabled HW arbitration     */
+#define NCSI_DEV_RESHUFFLE     4
+       spinlock_t          lock;            /* Protect the NCSI device    */
+#if IS_ENABLED(CONFIG_IPV6)
+       unsigned int        inet6_addr_num;  /* Number of IPv6 addresses   */
+#endif
+       unsigned int        package_num;     /* Number of packages         */
+       struct list_head    packages;        /* List of packages           */
+       struct ncsi_request requests[256];   /* Request table              */
+       unsigned int        request_id;      /* Last used request ID       */
+       unsigned int        pending_req_num; /* Number of pending requests */
+       struct ncsi_package *active_package; /* Currently handled package  */
+       struct ncsi_channel *active_channel; /* Currently handled channel  */
+       struct list_head    channel_queue;   /* Config queue of channels   */
+       struct work_struct  work;            /* For channel management     */
+       struct packet_type  ptype;           /* NCSI packet Rx handler     */
+       struct list_head    node;            /* Form NCSI device list      */
+};
+
+struct ncsi_cmd_arg {
+       struct ncsi_dev_priv *ndp;        /* Associated NCSI device        */
+       unsigned char        type;        /* Command in the NCSI packet    */
+       unsigned char        id;          /* Request ID (sequence number)  */
+       unsigned char        package;     /* Destination package ID        */
+       unsigned char        channel;     /* Destination channel ID or 0x1f */
+       unsigned short       payload;     /* Command packet payload length */
+       bool                 driven;      /* Drive the state machine?      */
+       union {
+               unsigned char  bytes[16]; /* Command packet specific data  */
+               unsigned short words[8];
+               unsigned int   dwords[4];
+       };
+};
+
+extern struct list_head ncsi_dev_list;
+extern spinlock_t ncsi_dev_lock;
+
+#define TO_NCSI_DEV_PRIV(nd) \
+       container_of(nd, struct ncsi_dev_priv, ndev)
+#define NCSI_FOR_EACH_DEV(ndp) \
+       list_for_each_entry_rcu(ndp, &ncsi_dev_list, node)
+#define NCSI_FOR_EACH_PACKAGE(ndp, np) \
+       list_for_each_entry_rcu(np, &ndp->packages, node)
+#define NCSI_FOR_EACH_CHANNEL(np, nc) \
+       list_for_each_entry_rcu(nc, &np->channels, node)
+
+/* Resources */
+int ncsi_find_filter(struct ncsi_channel *nc, int table, void *data);
+int ncsi_add_filter(struct ncsi_channel *nc, int table, void *data);
+int ncsi_remove_filter(struct ncsi_channel *nc, int table, int index);
+void ncsi_start_channel_monitor(struct ncsi_channel *nc);
+void ncsi_stop_channel_monitor(struct ncsi_channel *nc);
+struct ncsi_channel *ncsi_find_channel(struct ncsi_package *np,
+                                      unsigned char id);
+struct ncsi_channel *ncsi_add_channel(struct ncsi_package *np,
+                                     unsigned char id);
+struct ncsi_package *ncsi_find_package(struct ncsi_dev_priv *ndp,
+                                      unsigned char id);
+struct ncsi_package *ncsi_add_package(struct ncsi_dev_priv *ndp,
+                                     unsigned char id);
+void ncsi_remove_package(struct ncsi_package *np);
+void ncsi_find_package_and_channel(struct ncsi_dev_priv *ndp,
+                                  unsigned char id,
+                                  struct ncsi_package **np,
+                                  struct ncsi_channel **nc);
+struct ncsi_request *ncsi_alloc_request(struct ncsi_dev_priv *ndp, bool driven);
+void ncsi_free_request(struct ncsi_request *nr);
+struct ncsi_dev *ncsi_find_dev(struct net_device *dev);
+int ncsi_process_next_channel(struct ncsi_dev_priv *ndp);
+
+/* Packet handlers */
+u32 ncsi_calculate_checksum(unsigned char *data, int len);
+int ncsi_xmit_cmd(struct ncsi_cmd_arg *nca);
+int ncsi_rcv_rsp(struct sk_buff *skb, struct net_device *dev,
+                struct packet_type *pt, struct net_device *orig_dev);
+int ncsi_aen_handler(struct ncsi_dev_priv *ndp, struct sk_buff *skb);
+
+#endif /* __NCSI_INTERNAL_H__ */
diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c
new file mode 100644 (file)
index 0000000..d463468
--- /dev/null
@@ -0,0 +1,193 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+
+#include <net/ncsi.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+
+#include "internal.h"
+#include "ncsi-pkt.h"
+
+static int ncsi_validate_aen_pkt(struct ncsi_aen_pkt_hdr *h,
+                                const unsigned short payload)
+{
+       u32 checksum;
+       __be32 *pchecksum;
+
+       if (h->common.revision != NCSI_PKT_REVISION)
+               return -EINVAL;
+       if (ntohs(h->common.length) != payload)
+               return -EINVAL;
+
+       /* Validate the checksum, which may be zero if the
+        * sender doesn't support checksums, as permitted by
+        * the NCSI specification.
+        */
+       pchecksum = (__be32 *)((void *)(h + 1) + payload - 4);
+       if (ntohl(*pchecksum) == 0)
+               return 0;
+
+       checksum = ncsi_calculate_checksum((unsigned char *)h,
+                                          sizeof(*h) + payload - 4);
+       if (*pchecksum != htonl(checksum))
+               return -EINVAL;
+
+       return 0;
+}
+
+static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp,
+                               struct ncsi_aen_pkt_hdr *h)
+{
+       struct ncsi_aen_lsc_pkt *lsc;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+       unsigned long old_data;
+       unsigned long flags;
+
+       /* Find the NCSI channel */
+       ncsi_find_package_and_channel(ndp, h->common.channel, NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update the link status */
+       ncm = &nc->modes[NCSI_MODE_LINK];
+       lsc = (struct ncsi_aen_lsc_pkt *)h;
+       old_data = ncm->data[2];
+       ncm->data[2] = ntohl(lsc->status);
+       ncm->data[4] = ntohl(lsc->oem_status);
+       if (!((old_data ^ ncm->data[2]) & 0x1) ||
+           !list_empty(&nc->link))
+               return 0;
+       if (!(nc->state == NCSI_CHANNEL_INACTIVE && (ncm->data[2] & 0x1)) &&
+           !(nc->state == NCSI_CHANNEL_ACTIVE && !(ncm->data[2] & 0x1)))
+               return 0;
+
+       if (!(ndp->flags & NCSI_DEV_HWA) &&
+           nc->state == NCSI_CHANNEL_ACTIVE)
+               ndp->flags |= NCSI_DEV_RESHUFFLE;
+
+       ncsi_stop_channel_monitor(nc);
+       spin_lock_irqsave(&ndp->lock, flags);
+       list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       return ncsi_process_next_channel(ndp);
+}
+
+static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp,
+                              struct ncsi_aen_pkt_hdr *h)
+{
+       struct ncsi_channel *nc;
+       unsigned long flags;
+
+       /* Find the NCSI channel */
+       ncsi_find_package_and_channel(ndp, h->common.channel, NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       if (!list_empty(&nc->link) ||
+           nc->state != NCSI_CHANNEL_ACTIVE)
+               return 0;
+
+       ncsi_stop_channel_monitor(nc);
+       spin_lock_irqsave(&ndp->lock, flags);
+       xchg(&nc->state, NCSI_CHANNEL_INACTIVE);
+       list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       return ncsi_process_next_channel(ndp);
+}
+
+static int ncsi_aen_handler_hncdsc(struct ncsi_dev_priv *ndp,
+                                  struct ncsi_aen_pkt_hdr *h)
+{
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+       struct ncsi_aen_hncdsc_pkt *hncdsc;
+       unsigned long flags;
+
+       /* Find the NCSI channel */
+       ncsi_find_package_and_channel(ndp, h->common.channel, NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* If the channel is the active one, we need to reconfigure it */
+       ncm = &nc->modes[NCSI_MODE_LINK];
+       hncdsc = (struct ncsi_aen_hncdsc_pkt *)h;
+       ncm->data[3] = ntohl(hncdsc->status);
+       if (!list_empty(&nc->link) ||
+           nc->state != NCSI_CHANNEL_ACTIVE ||
+           (ncm->data[3] & 0x1))
+               return 0;
+
+       if (ndp->flags & NCSI_DEV_HWA)
+               ndp->flags |= NCSI_DEV_RESHUFFLE;
+
+       /* If this channel is the active one and its link doesn't
+        * work, we have to choose another channel to be active.
+        * The logic here is the same as when the link goes down
+        * on the active channel.
+        */
+       ncsi_stop_channel_monitor(nc);
+       spin_lock_irqsave(&ndp->lock, flags);
+       list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       ncsi_process_next_channel(ndp);
+
+       return 0;
+}
+
+static struct ncsi_aen_handler {
+       unsigned char type;
+       int           payload;
+       int           (*handler)(struct ncsi_dev_priv *ndp,
+                                struct ncsi_aen_pkt_hdr *h);
+} ncsi_aen_handlers[] = {
+       { NCSI_PKT_AEN_LSC,    12, ncsi_aen_handler_lsc    },
+       { NCSI_PKT_AEN_CR,      4, ncsi_aen_handler_cr     },
+       { NCSI_PKT_AEN_HNCDSC,  4, ncsi_aen_handler_hncdsc }
+};
+
+int ncsi_aen_handler(struct ncsi_dev_priv *ndp, struct sk_buff *skb)
+{
+       struct ncsi_aen_pkt_hdr *h;
+       struct ncsi_aen_handler *nah = NULL;
+       int i, ret;
+
+       /* Find the handler */
+       h = (struct ncsi_aen_pkt_hdr *)skb_network_header(skb);
+       for (i = 0; i < ARRAY_SIZE(ncsi_aen_handlers); i++) {
+               if (ncsi_aen_handlers[i].type == h->type) {
+                       nah = &ncsi_aen_handlers[i];
+                       break;
+               }
+       }
+
+       if (!nah) {
+               netdev_warn(ndp->ndev.dev, "Invalid AEN (0x%x) received\n",
+                           h->type);
+               return -ENOENT;
+       }
+
+       ret = ncsi_validate_aen_pkt(h, nah->payload);
+       if (ret)
+               goto out;
+
+       ret = nah->handler(ndp, h);
+out:
+       consume_skb(skb);
+       return ret;
+}
diff --git a/net/ncsi/ncsi-cmd.c b/net/ncsi/ncsi-cmd.c
new file mode 100644 (file)
index 0000000..21057a8
--- /dev/null
@@ -0,0 +1,367 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/etherdevice.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+
+#include <net/ncsi.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+
+#include "internal.h"
+#include "ncsi-pkt.h"
+
+u32 ncsi_calculate_checksum(unsigned char *data, int len)
+{
+       u32 checksum = 0;
+       int i;
+
+       for (i = 0; i < len; i += 2)
+               checksum += (((u32)data[i] << 8) | data[i + 1]);
+
+       checksum = (~checksum + 1);
+       return checksum;
+}
+
+/* This function should be called after the data area has been
+ * populated completely.
+ */
+static void ncsi_cmd_build_header(struct ncsi_pkt_hdr *h,
+                                 struct ncsi_cmd_arg *nca)
+{
+       u32 checksum;
+       __be32 *pchecksum;
+
+       h->mc_id        = 0;
+       h->revision     = NCSI_PKT_REVISION;
+       h->reserved     = 0;
+       h->id           = nca->id;
+       h->type         = nca->type;
+       h->channel      = NCSI_TO_CHANNEL(nca->package,
+                                         nca->channel);
+       h->length       = htons(nca->payload);
+       h->reserved1[0] = 0;
+       h->reserved1[1] = 0;
+
+       /* Fill with calculated checksum */
+       checksum = ncsi_calculate_checksum((unsigned char *)h,
+                                          sizeof(*h) + nca->payload);
+       pchecksum = (__be32 *)((void *)h + sizeof(struct ncsi_pkt_hdr) +
+                   nca->payload);
+       *pchecksum = htonl(checksum);
+}
+
+static int ncsi_cmd_handler_default(struct sk_buff *skb,
+                                   struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_sp(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_sp_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_sp_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->hw_arbitration = nca->bytes[0];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_dc(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_dc_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_dc_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->ald = nca->bytes[0];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_rc(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_rc_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_rc_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_ae(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_ae_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_ae_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mc_id = nca->bytes[0];
+       cmd->mode = htonl(nca->dwords[1]);
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_sl(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_sl_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_sl_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mode = htonl(nca->dwords[0]);
+       cmd->oem_mode = htonl(nca->dwords[1]);
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_svf(struct sk_buff *skb,
+                               struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_svf_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_svf_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->vlan = htons(nca->words[0]);
+       cmd->index = nca->bytes[2];
+       cmd->enable = nca->bytes[3];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_ev(struct sk_buff *skb,
+                              struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_ev_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_ev_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mode = nca->bytes[0];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_sma(struct sk_buff *skb,
+                               struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_sma_pkt *cmd;
+       int i;
+
+       cmd = (struct ncsi_cmd_sma_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       for (i = 0; i < 6; i++)
+               cmd->mac[i] = nca->bytes[i];
+       cmd->index = nca->bytes[6];
+       cmd->at_e = nca->bytes[7];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_ebf(struct sk_buff *skb,
+                               struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_ebf_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_ebf_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mode = htonl(nca->dwords[0]);
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_egmf(struct sk_buff *skb,
+                                struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_egmf_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_egmf_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mode = htonl(nca->dwords[0]);
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static int ncsi_cmd_handler_snfc(struct sk_buff *skb,
+                                struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_cmd_snfc_pkt *cmd;
+
+       cmd = (struct ncsi_cmd_snfc_pkt *)skb_put(skb, sizeof(*cmd));
+       memset(cmd, 0, sizeof(*cmd));
+       cmd->mode = nca->bytes[0];
+       ncsi_cmd_build_header(&cmd->cmd.common, nca);
+
+       return 0;
+}
+
+static struct ncsi_cmd_handler {
+       unsigned char type;
+       int           payload;
+       int           (*handler)(struct sk_buff *skb,
+                                struct ncsi_cmd_arg *nca);
+} ncsi_cmd_handlers[] = {
+       { NCSI_PKT_CMD_CIS,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_SP,     4, ncsi_cmd_handler_sp      },
+       { NCSI_PKT_CMD_DP,     0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_EC,     0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_DC,     4, ncsi_cmd_handler_dc      },
+       { NCSI_PKT_CMD_RC,     4, ncsi_cmd_handler_rc      },
+       { NCSI_PKT_CMD_ECNT,   0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_DCNT,   0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_AE,     8, ncsi_cmd_handler_ae      },
+       { NCSI_PKT_CMD_SL,     8, ncsi_cmd_handler_sl      },
+       { NCSI_PKT_CMD_GLS,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_SVF,    4, ncsi_cmd_handler_svf     },
+       { NCSI_PKT_CMD_EV,     4, ncsi_cmd_handler_ev      },
+       { NCSI_PKT_CMD_DV,     0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_SMA,    8, ncsi_cmd_handler_sma     },
+       { NCSI_PKT_CMD_EBF,    4, ncsi_cmd_handler_ebf     },
+       { NCSI_PKT_CMD_DBF,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_EGMF,   4, ncsi_cmd_handler_egmf    },
+       { NCSI_PKT_CMD_DGMF,   0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_SNFC,   4, ncsi_cmd_handler_snfc    },
+       { NCSI_PKT_CMD_GVI,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GC,     0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GP,     0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GCPS,   0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GNS,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GNPTS,  0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_GPS,    0, ncsi_cmd_handler_default },
+       { NCSI_PKT_CMD_OEM,    0, NULL                     },
+       { NCSI_PKT_CMD_PLDM,   0, NULL                     },
+       { NCSI_PKT_CMD_GPUUID, 0, ncsi_cmd_handler_default }
+};
+
+static struct ncsi_request *ncsi_alloc_command(struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_dev_priv *ndp = nca->ndp;
+       struct ncsi_dev *nd = &ndp->ndev;
+       struct net_device *dev = nd->dev;
+       int hlen = LL_RESERVED_SPACE(dev);
+       int tlen = dev->needed_tailroom;
+       int len = hlen + tlen;
+       struct sk_buff *skb;
+       struct ncsi_request *nr;
+
+       nr = ncsi_alloc_request(ndp, nca->driven);
+       if (!nr)
+               return NULL;
+
+       /* An NCSI command packet has a 16-byte header, the payload and
+        * a 4-byte checksum. The payload is padded to 26 bytes so that
+        * the frame meets the 64-byte minimum Ethernet frame length.
+        */
+       len += sizeof(struct ncsi_cmd_pkt_hdr) + 4;
+       if (nca->payload < 26)
+               len += 26;
+       else
+               len += nca->payload;
+
+       /* Allocate skb */
+       skb = alloc_skb(len, GFP_ATOMIC);
+       if (!skb) {
+               ncsi_free_request(nr);
+               return NULL;
+       }
+
+       nr->cmd = skb;
+       skb_reserve(skb, hlen);
+       skb_reset_network_header(skb);
+
+       skb->dev = dev;
+       skb->protocol = htons(ETH_P_NCSI);
+
+       return nr;
+}
+
+int ncsi_xmit_cmd(struct ncsi_cmd_arg *nca)
+{
+       struct ncsi_request *nr;
+       struct ethhdr *eh;
+       struct ncsi_cmd_handler *nch = NULL;
+       int i, ret;
+
+       /* Search for the handler */
+       for (i = 0; i < ARRAY_SIZE(ncsi_cmd_handlers); i++) {
+               if (ncsi_cmd_handlers[i].type == nca->type) {
+                       if (ncsi_cmd_handlers[i].handler)
+                               nch = &ncsi_cmd_handlers[i];
+                       else
+                               nch = NULL;
+
+                       break;
+               }
+       }
+
+       if (!nch) {
+               netdev_err(nca->ndp->ndev.dev,
+                          "Cannot send packet with type 0x%02x\n", nca->type);
+               return -ENOENT;
+       }
+
+       /* Get packet payload length and allocate the request */
+       nca->payload = nch->payload;
+       nr = ncsi_alloc_command(nca);
+       if (!nr)
+               return -ENOMEM;
+
+       /* Prepare the packet */
+       nca->id = nr->id;
+       ret = nch->handler(nr->cmd, nca);
+       if (ret) {
+               ncsi_free_request(nr);
+               return ret;
+       }
+
+       /* Fill the ethernet header */
+       eh = (struct ethhdr *)skb_push(nr->cmd, sizeof(*eh));
+       eh->h_proto = htons(ETH_P_NCSI);
+       eth_broadcast_addr(eh->h_dest);
+       eth_broadcast_addr(eh->h_source);
+
+       /* Start the timer for the request, which might not get a
+        * corresponding response. Given that NCSI is an internal
+        * connection, a 1 second timeout should be sufficient.
+        */
+       nr->enabled = true;
+       mod_timer(&nr->timer, jiffies + 1 * HZ);
+
+       /* Send NCSI packet */
+       skb_get(nr->cmd);
+       ret = dev_queue_xmit(nr->cmd);
+       if (ret < 0) {
+               ncsi_free_request(nr);
+               return ret;
+       }
+
+       return 0;
+}
diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
new file mode 100644 (file)
index 0000000..d627a39
--- /dev/null
@@ -0,0 +1,1199 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+
+#include <net/ncsi.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+#include <net/addrconf.h>
+#include <net/ipv6.h>
+#include <net/if_inet6.h>
+
+#include "internal.h"
+#include "ncsi-pkt.h"
+
+LIST_HEAD(ncsi_dev_list);
+DEFINE_SPINLOCK(ncsi_dev_lock);
+
+static inline int ncsi_filter_size(int table)
+{
+       int sizes[] = { 2, 6, 6, 6 };
+
+       BUILD_BUG_ON(ARRAY_SIZE(sizes) != NCSI_FILTER_MAX);
+       if (table < NCSI_FILTER_BASE || table >= NCSI_FILTER_MAX)
+               return -EINVAL;
+
+       return sizes[table];
+}
+
+int ncsi_find_filter(struct ncsi_channel *nc, int table, void *data)
+{
+       struct ncsi_channel_filter *ncf;
+       void *bitmap;
+       int index, size;
+       unsigned long flags;
+
+       ncf = nc->filters[table];
+       if (!ncf)
+               return -ENXIO;
+
+       size = ncsi_filter_size(table);
+       if (size < 0)
+               return size;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       bitmap = (void *)&ncf->bitmap;
+       index = -1;
+       while ((index = find_next_bit(bitmap, ncf->total, index + 1))
+              < ncf->total) {
+               if (!memcmp(ncf->data + size * index, data, size)) {
+                       spin_unlock_irqrestore(&nc->lock, flags);
+                       return index;
+               }
+       }
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       return -ENOENT;
+}
+
+int ncsi_add_filter(struct ncsi_channel *nc, int table, void *data)
+{
+       struct ncsi_channel_filter *ncf;
+       int index, size;
+       void *bitmap;
+       unsigned long flags;
+
+       size = ncsi_filter_size(table);
+       if (size < 0)
+               return size;
+
+       index = ncsi_find_filter(nc, table, data);
+       if (index >= 0)
+               return index;
+
+       ncf = nc->filters[table];
+       if (!ncf)
+               return -ENODEV;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       bitmap = (void *)&ncf->bitmap;
+       do {
+               index = find_next_zero_bit(bitmap, ncf->total, 0);
+               if (index >= ncf->total) {
+                       spin_unlock_irqrestore(&nc->lock, flags);
+                       return -ENOSPC;
+               }
+       } while (test_and_set_bit(index, bitmap));
+
+       memcpy(ncf->data + size * index, data, size);
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       return index;
+}
+
+int ncsi_remove_filter(struct ncsi_channel *nc, int table, int index)
+{
+       struct ncsi_channel_filter *ncf;
+       int size;
+       void *bitmap;
+       unsigned long flags;
+
+       size = ncsi_filter_size(table);
+       if (size < 0)
+               return size;
+
+       ncf = nc->filters[table];
+       if (!ncf || index >= ncf->total)
+               return -ENODEV;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       bitmap = (void *)&ncf->bitmap;
+       if (test_and_clear_bit(index, bitmap))
+               memset(ncf->data + size * index, 0, size);
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       return 0;
+}
+
+static void ncsi_report_link(struct ncsi_dev_priv *ndp, bool force_down)
+{
+       struct ncsi_dev *nd = &ndp->ndev;
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+
+       nd->state = ncsi_dev_state_functional;
+       if (force_down) {
+               nd->link_up = 0;
+               goto report;
+       }
+
+       nd->link_up = 0;
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       if (!list_empty(&nc->link) ||
+                           nc->state != NCSI_CHANNEL_ACTIVE)
+                               continue;
+
+                       if (nc->modes[NCSI_MODE_LINK].data[2] & 0x1) {
+                               nd->link_up = 1;
+                               goto report;
+                       }
+               }
+       }
+
+report:
+       nd->handler(nd);
+}
+
+static void ncsi_channel_monitor(unsigned long data)
+{
+       struct ncsi_channel *nc = (struct ncsi_channel *)data;
+       struct ncsi_package *np = nc->package;
+       struct ncsi_dev_priv *ndp = np->ndp;
+       struct ncsi_cmd_arg nca;
+       bool enabled;
+       unsigned int timeout;
+       unsigned long flags;
+       int ret;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       timeout = nc->timeout;
+       enabled = nc->enabled;
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       if (!enabled || !list_empty(&nc->link))
+               return;
+       if (nc->state != NCSI_CHANNEL_INACTIVE &&
+           nc->state != NCSI_CHANNEL_ACTIVE)
+               return;
+
+       if (!(timeout % 2)) {
+               nca.ndp = ndp;
+               nca.package = np->id;
+               nca.channel = nc->id;
+               nca.type = NCSI_PKT_CMD_GLS;
+               nca.driven = false;
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret) {
+                       netdev_err(ndp->ndev.dev, "Error %d sending GLS\n",
+                                  ret);
+                       return;
+               }
+       }
+
+       if (timeout + 1 >= 3) {
+               if (!(ndp->flags & NCSI_DEV_HWA) &&
+                   nc->state == NCSI_CHANNEL_ACTIVE)
+                       ncsi_report_link(ndp, true);
+
+               spin_lock_irqsave(&ndp->lock, flags);
+               xchg(&nc->state, NCSI_CHANNEL_INACTIVE);
+               list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+               spin_unlock_irqrestore(&ndp->lock, flags);
+               ncsi_process_next_channel(ndp);
+               return;
+       }
+
+       spin_lock_irqsave(&nc->lock, flags);
+       nc->timeout = timeout + 1;
+       nc->enabled = true;
+       spin_unlock_irqrestore(&nc->lock, flags);
+       mod_timer(&nc->timer, jiffies + HZ * (1 << (nc->timeout / 2)));
+}
+
+void ncsi_start_channel_monitor(struct ncsi_channel *nc)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       WARN_ON_ONCE(nc->enabled);
+       nc->timeout = 0;
+       nc->enabled = true;
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       mod_timer(&nc->timer, jiffies + HZ * (1 << (nc->timeout / 2)));
+}
+
+void ncsi_stop_channel_monitor(struct ncsi_channel *nc)
+{
+       unsigned long flags;
+
+       spin_lock_irqsave(&nc->lock, flags);
+       if (!nc->enabled) {
+               spin_unlock_irqrestore(&nc->lock, flags);
+               return;
+       }
+       nc->enabled = false;
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       del_timer_sync(&nc->timer);
+}
+
+struct ncsi_channel *ncsi_find_channel(struct ncsi_package *np,
+                                      unsigned char id)
+{
+       struct ncsi_channel *nc;
+
+       NCSI_FOR_EACH_CHANNEL(np, nc) {
+               if (nc->id == id)
+                       return nc;
+       }
+
+       return NULL;
+}
+
+struct ncsi_channel *ncsi_add_channel(struct ncsi_package *np, unsigned char id)
+{
+       struct ncsi_channel *nc, *tmp;
+       int index;
+       unsigned long flags;
+
+       nc = kzalloc(sizeof(*nc), GFP_ATOMIC);
+       if (!nc)
+               return NULL;
+
+       nc->id = id;
+       nc->package = np;
+       nc->state = NCSI_CHANNEL_INACTIVE;
+       nc->enabled = false;
+       setup_timer(&nc->timer, ncsi_channel_monitor, (unsigned long)nc);
+       spin_lock_init(&nc->lock);
+       INIT_LIST_HEAD(&nc->link);
+       for (index = 0; index < NCSI_CAP_MAX; index++)
+               nc->caps[index].index = index;
+       for (index = 0; index < NCSI_MODE_MAX; index++)
+               nc->modes[index].index = index;
+
+       spin_lock_irqsave(&np->lock, flags);
+       tmp = ncsi_find_channel(np, id);
+       if (tmp) {
+               spin_unlock_irqrestore(&np->lock, flags);
+               kfree(nc);
+               return tmp;
+       }
+
+       list_add_tail_rcu(&nc->node, &np->channels);
+       np->channel_num++;
+       spin_unlock_irqrestore(&np->lock, flags);
+
+       return nc;
+}
+
+static void ncsi_remove_channel(struct ncsi_channel *nc)
+{
+       struct ncsi_package *np = nc->package;
+       struct ncsi_channel_filter *ncf;
+       unsigned long flags;
+       int i;
+
+       /* Release filters */
+       spin_lock_irqsave(&nc->lock, flags);
+       for (i = 0; i < NCSI_FILTER_MAX; i++) {
+               ncf = nc->filters[i];
+               if (!ncf)
+                       continue;
+
+               nc->filters[i] = NULL;
+               kfree(ncf);
+       }
+
+       nc->state = NCSI_CHANNEL_INACTIVE;
+       spin_unlock_irqrestore(&nc->lock, flags);
+       ncsi_stop_channel_monitor(nc);
+
+       /* Remove and free channel */
+       spin_lock_irqsave(&np->lock, flags);
+       list_del_rcu(&nc->node);
+       np->channel_num--;
+       spin_unlock_irqrestore(&np->lock, flags);
+
+       kfree(nc);
+}
+
+struct ncsi_package *ncsi_find_package(struct ncsi_dev_priv *ndp,
+                                      unsigned char id)
+{
+       struct ncsi_package *np;
+
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               if (np->id == id)
+                       return np;
+       }
+
+       return NULL;
+}
+
+struct ncsi_package *ncsi_add_package(struct ncsi_dev_priv *ndp,
+                                     unsigned char id)
+{
+       struct ncsi_package *np, *tmp;
+       unsigned long flags;
+
+       np = kzalloc(sizeof(*np), GFP_ATOMIC);
+       if (!np)
+               return NULL;
+
+       np->id = id;
+       np->ndp = ndp;
+       spin_lock_init(&np->lock);
+       INIT_LIST_HEAD(&np->channels);
+
+       spin_lock_irqsave(&ndp->lock, flags);
+       tmp = ncsi_find_package(ndp, id);
+       if (tmp) {
+               spin_unlock_irqrestore(&ndp->lock, flags);
+               kfree(np);
+               return tmp;
+       }
+
+       list_add_tail_rcu(&np->node, &ndp->packages);
+       ndp->package_num++;
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       return np;
+}
+
+void ncsi_remove_package(struct ncsi_package *np)
+{
+       struct ncsi_dev_priv *ndp = np->ndp;
+       struct ncsi_channel *nc, *tmp;
+       unsigned long flags;
+
+       /* Release all child channels */
+       list_for_each_entry_safe(nc, tmp, &np->channels, node)
+               ncsi_remove_channel(nc);
+
+       /* Remove and free package */
+       spin_lock_irqsave(&ndp->lock, flags);
+       list_del_rcu(&np->node);
+       ndp->package_num--;
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       kfree(np);
+}
+
+void ncsi_find_package_and_channel(struct ncsi_dev_priv *ndp,
+                                  unsigned char id,
+                                  struct ncsi_package **np,
+                                  struct ncsi_channel **nc)
+{
+       struct ncsi_package *p;
+       struct ncsi_channel *c;
+
+       p = ncsi_find_package(ndp, NCSI_PACKAGE_INDEX(id));
+       c = p ? ncsi_find_channel(p, NCSI_CHANNEL_INDEX(id)) : NULL;
+
+       if (np)
+               *np = p;
+       if (nc)
+               *nc = c;
+}
+
+/* Two consecutive NCSI commands shouldn't use the same packet ID.
+ * Otherwise, a stale response might be matched to the wrong request.
+ * So the available IDs are allocated in round-robin fashion.
+ */
+struct ncsi_request *ncsi_alloc_request(struct ncsi_dev_priv *ndp, bool driven)
+{
+       struct ncsi_request *nr = NULL;
+       int i, limit = ARRAY_SIZE(ndp->requests);
+       unsigned long flags;
+
+       /* Check if there is an available request from the cursor
+        * up to the ceiling
+        */
+       spin_lock_irqsave(&ndp->lock, flags);
+       for (i = ndp->request_id; !nr && i < limit; i++) {
+               if (ndp->requests[i].used)
+                       continue;
+
+               nr = &ndp->requests[i];
+               nr->used = true;
+               nr->driven = driven;
+               if (++ndp->request_id >= limit)
+                       ndp->request_id = 0;
+       }
+
+       /* Fall back to searching from the start of the array */
+       for (i = 0; !nr && i < ndp->request_id; i++) {
+               if (ndp->requests[i].used)
+                       continue;
+
+               nr = &ndp->requests[i];
+               nr->used = true;
+               nr->driven = driven;
+               if (++ndp->request_id >= limit)
+                       ndp->request_id = 0;
+       }
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       return nr;
+}
+
+void ncsi_free_request(struct ncsi_request *nr)
+{
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct sk_buff *cmd, *rsp;
+       unsigned long flags;
+       bool driven;
+
+       if (nr->enabled) {
+               nr->enabled = false;
+               del_timer_sync(&nr->timer);
+       }
+
+       spin_lock_irqsave(&ndp->lock, flags);
+       cmd = nr->cmd;
+       rsp = nr->rsp;
+       nr->cmd = NULL;
+       nr->rsp = NULL;
+       nr->used = false;
+       driven = nr->driven;
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       if (driven && cmd && --ndp->pending_req_num == 0)
+               schedule_work(&ndp->work);
+
+       /* Release command and response */
+       consume_skb(cmd);
+       consume_skb(rsp);
+}
+
+struct ncsi_dev *ncsi_find_dev(struct net_device *dev)
+{
+       struct ncsi_dev_priv *ndp;
+
+       NCSI_FOR_EACH_DEV(ndp) {
+               if (ndp->ndev.dev == dev)
+                       return &ndp->ndev;
+       }
+
+       return NULL;
+}
+
+static void ncsi_request_timeout(unsigned long data)
+{
+       struct ncsi_request *nr = (struct ncsi_request *)data;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       unsigned long flags;
+
+       /* If the request already has an associated response,
+        * let the response handler release it.
+        */
+       spin_lock_irqsave(&ndp->lock, flags);
+       nr->enabled = false;
+       if (nr->rsp || !nr->cmd) {
+               spin_unlock_irqrestore(&ndp->lock, flags);
+               return;
+       }
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       /* Release the request */
+       ncsi_free_request(nr);
+}
+
+static void ncsi_suspend_channel(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_dev *nd = &ndp->ndev;
+       struct ncsi_package *np = ndp->active_package;
+       struct ncsi_channel *nc = ndp->active_channel;
+       struct ncsi_cmd_arg nca;
+       int ret;
+
+       nca.ndp = ndp;
+       nca.driven = true;
+       switch (nd->state) {
+       case ncsi_dev_state_suspend:
+               nd->state = ncsi_dev_state_suspend_select;
+               /* Fall through */
+       case ncsi_dev_state_suspend_select:
+       case ncsi_dev_state_suspend_dcnt:
+       case ncsi_dev_state_suspend_dc:
+       case ncsi_dev_state_suspend_deselect:
+               ndp->pending_req_num = 1;
+
+               np = ndp->active_package;
+               nc = ndp->active_channel;
+               nca.package = np->id;
+               if (nd->state == ncsi_dev_state_suspend_select) {
+                       nca.type = NCSI_PKT_CMD_SP;
+                       nca.channel = 0x1f;
+                       if (ndp->flags & NCSI_DEV_HWA)
+                               nca.bytes[0] = 0;
+                       else
+                               nca.bytes[0] = 1;
+                       nd->state = ncsi_dev_state_suspend_dcnt;
+               } else if (nd->state == ncsi_dev_state_suspend_dcnt) {
+                       nca.type = NCSI_PKT_CMD_DCNT;
+                       nca.channel = nc->id;
+                       nd->state = ncsi_dev_state_suspend_dc;
+               } else if (nd->state == ncsi_dev_state_suspend_dc) {
+                       nca.type = NCSI_PKT_CMD_DC;
+                       nca.channel = nc->id;
+                       nca.bytes[0] = 1;
+                       nd->state = ncsi_dev_state_suspend_deselect;
+               } else if (nd->state == ncsi_dev_state_suspend_deselect) {
+                       nca.type = NCSI_PKT_CMD_DP;
+                       nca.channel = 0x1f;
+                       nd->state = ncsi_dev_state_suspend_done;
+               }
+
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret) {
+                       nd->state = ncsi_dev_state_functional;
+                       return;
+               }
+
+               break;
+       case ncsi_dev_state_suspend_done:
+               xchg(&nc->state, NCSI_CHANNEL_INACTIVE);
+               ncsi_process_next_channel(ndp);
+
+               break;
+       default:
+               netdev_warn(nd->dev, "Wrong NCSI state 0x%x in suspend\n",
+                           nd->state);
+       }
+}
+
+static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_dev *nd = &ndp->ndev;
+       struct net_device *dev = nd->dev;
+       struct ncsi_package *np = ndp->active_package;
+       struct ncsi_channel *nc = ndp->active_channel;
+       struct ncsi_cmd_arg nca;
+       unsigned char index;
+       int ret;
+
+       nca.ndp = ndp;
+       nca.driven = true;
+       switch (nd->state) {
+       case ncsi_dev_state_config:
+       case ncsi_dev_state_config_sp:
+               ndp->pending_req_num = 1;
+
+               /* Select the specific package */
+               nca.type = NCSI_PKT_CMD_SP;
+               if (ndp->flags & NCSI_DEV_HWA)
+                       nca.bytes[0] = 0;
+               else
+                       nca.bytes[0] = 1;
+               nca.package = np->id;
+               nca.channel = 0x1f;
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret)
+                       goto error;
+
+               nd->state = ncsi_dev_state_config_cis;
+               break;
+       case ncsi_dev_state_config_cis:
+               ndp->pending_req_num = 1;
+
+               /* Clear initial state */
+               nca.type = NCSI_PKT_CMD_CIS;
+               nca.package = np->id;
+               nca.channel = nc->id;
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret)
+                       goto error;
+
+               nd->state = ncsi_dev_state_config_sma;
+               break;
+       case ncsi_dev_state_config_sma:
+       case ncsi_dev_state_config_ebf:
+#if IS_ENABLED(CONFIG_IPV6)
+       case ncsi_dev_state_config_egmf:
+#endif
+       case ncsi_dev_state_config_ecnt:
+       case ncsi_dev_state_config_ec:
+       case ncsi_dev_state_config_ae:
+       case ncsi_dev_state_config_gls:
+               ndp->pending_req_num = 1;
+
+               nca.package = np->id;
+               nca.channel = nc->id;
+
+               /* Use the first entry of the unicast filter table.
+                * Note that the MAC filter table starts from entry 1
+                * instead of 0.
+                */
+               if (nd->state == ncsi_dev_state_config_sma) {
+                       nca.type = NCSI_PKT_CMD_SMA;
+                       for (index = 0; index < 6; index++)
+                               nca.bytes[index] = dev->dev_addr[index];
+                       nca.bytes[6] = 0x1;
+                       nca.bytes[7] = 0x1;
+                       nd->state = ncsi_dev_state_config_ebf;
+               } else if (nd->state == ncsi_dev_state_config_ebf) {
+                       nca.type = NCSI_PKT_CMD_EBF;
+                       nca.dwords[0] = nc->caps[NCSI_CAP_BC].cap;
+                       nd->state = ncsi_dev_state_config_ecnt;
+#if IS_ENABLED(CONFIG_IPV6)
+                       if (ndp->inet6_addr_num > 0 &&
+                           (nc->caps[NCSI_CAP_GENERIC].cap &
+                            NCSI_CAP_GENERIC_MC))
+                               nd->state = ncsi_dev_state_config_egmf;
+                       else
+                               nd->state = ncsi_dev_state_config_ecnt;
+               } else if (nd->state == ncsi_dev_state_config_egmf) {
+                       nca.type = NCSI_PKT_CMD_EGMF;
+                       nca.dwords[0] = nc->caps[NCSI_CAP_MC].cap;
+                       nd->state = ncsi_dev_state_config_ecnt;
+#endif /* CONFIG_IPV6 */
+               } else if (nd->state == ncsi_dev_state_config_ecnt) {
+                       nca.type = NCSI_PKT_CMD_ECNT;
+                       nd->state = ncsi_dev_state_config_ec;
+               } else if (nd->state == ncsi_dev_state_config_ec) {
+                       /* Enable AEN if it's supported */
+                       nca.type = NCSI_PKT_CMD_EC;
+                       nd->state = ncsi_dev_state_config_ae;
+                       if (!(nc->caps[NCSI_CAP_AEN].cap & NCSI_CAP_AEN_MASK))
+                               nd->state = ncsi_dev_state_config_gls;
+               } else if (nd->state == ncsi_dev_state_config_ae) {
+                       nca.type = NCSI_PKT_CMD_AE;
+                       nca.bytes[0] = 0;
+                       nca.dwords[1] = nc->caps[NCSI_CAP_AEN].cap;
+                       nd->state = ncsi_dev_state_config_gls;
+               } else if (nd->state == ncsi_dev_state_config_gls) {
+                       nca.type = NCSI_PKT_CMD_GLS;
+                       nd->state = ncsi_dev_state_config_done;
+               }
+
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret)
+                       goto error;
+               break;
+       case ncsi_dev_state_config_done:
+               if (nc->modes[NCSI_MODE_LINK].data[2] & 0x1)
+                       xchg(&nc->state, NCSI_CHANNEL_ACTIVE);
+               else
+                       xchg(&nc->state, NCSI_CHANNEL_INACTIVE);
+
+               ncsi_start_channel_monitor(nc);
+               ncsi_process_next_channel(ndp);
+               break;
+       default:
+               netdev_warn(dev, "Wrong NCSI state 0x%x in config\n",
+                           nd->state);
+       }
+
+       return;
+
+error:
+       ncsi_report_link(ndp, true);
+}
+
+static int ncsi_choose_active_channel(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_package *np;
+       struct ncsi_channel *nc, *found;
+       struct ncsi_channel_mode *ncm;
+       unsigned long flags;
+
+       /* The search stops once an inactive channel with link
+        * up is found.
+        */
+       found = NULL;
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       if (!list_empty(&nc->link) ||
+                           nc->state != NCSI_CHANNEL_INACTIVE)
+                               continue;
+
+                       if (!found)
+                               found = nc;
+
+                       ncm = &nc->modes[NCSI_MODE_LINK];
+                       if (ncm->data[2] & 0x1) {
+                               found = nc;
+                               goto out;
+                       }
+               }
+       }
+
+       if (!found) {
+               ncsi_report_link(ndp, true);
+               return -ENODEV;
+       }
+
+out:
+       spin_lock_irqsave(&ndp->lock, flags);
+       list_add_tail_rcu(&found->link, &ndp->channel_queue);
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       return ncsi_process_next_channel(ndp);
+}
+
+static bool ncsi_check_hwa(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       unsigned int cap;
+
+       /* Hardware arbitration is disabled if any channel doesn't
+        * explicitly support it.
+        */
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       cap = nc->caps[NCSI_CAP_GENERIC].cap;
+                       if (!(cap & NCSI_CAP_GENERIC_HWA) ||
+                           (cap & NCSI_CAP_GENERIC_HWA_MASK) !=
+                           NCSI_CAP_GENERIC_HWA_SUPPORT) {
+                               ndp->flags &= ~NCSI_DEV_HWA;
+                               return false;
+                       }
+               }
+       }
+
+       ndp->flags |= NCSI_DEV_HWA;
+       return true;
+}
+
+static int ncsi_enable_hwa(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       unsigned long flags;
+
+       /* Move all available channels to processing queue */
+       spin_lock_irqsave(&ndp->lock, flags);
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       WARN_ON_ONCE(nc->state != NCSI_CHANNEL_INACTIVE ||
+                                    !list_empty(&nc->link));
+                       ncsi_stop_channel_monitor(nc);
+                       list_add_tail_rcu(&nc->link, &ndp->channel_queue);
+               }
+       }
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       /* We might have no channels in extreme cases */
+       if (list_empty(&ndp->channel_queue)) {
+               ncsi_report_link(ndp, false);
+               return -ENOENT;
+       }
+
+       return ncsi_process_next_channel(ndp);
+}
+
+static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_dev *nd = &ndp->ndev;
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       struct ncsi_cmd_arg nca;
+       unsigned char index;
+       int ret;
+
+       nca.ndp = ndp;
+       nca.driven = true;
+       switch (nd->state) {
+       case ncsi_dev_state_probe:
+               nd->state = ncsi_dev_state_probe_deselect;
+               /* Fall through */
+       case ncsi_dev_state_probe_deselect:
+               ndp->pending_req_num = 8;
+
+               /* Deselect all possible packages */
+               nca.type = NCSI_PKT_CMD_DP;
+               nca.channel = 0x1f;
+               for (index = 0; index < 8; index++) {
+                       nca.package = index;
+                       ret = ncsi_xmit_cmd(&nca);
+                       if (ret)
+                               goto error;
+               }
+
+               nd->state = ncsi_dev_state_probe_package;
+               break;
+       case ncsi_dev_state_probe_package:
+               ndp->pending_req_num = 16;
+
+               /* Select all possible packages */
+               nca.type = NCSI_PKT_CMD_SP;
+               nca.bytes[0] = 1;
+               nca.channel = 0x1f;
+               for (index = 0; index < 8; index++) {
+                       nca.package = index;
+                       ret = ncsi_xmit_cmd(&nca);
+                       if (ret)
+                               goto error;
+               }
+
+               /* Disable all possible packages */
+               nca.type = NCSI_PKT_CMD_DP;
+               for (index = 0; index < 8; index++) {
+                       nca.package = index;
+                       ret = ncsi_xmit_cmd(&nca);
+                       if (ret)
+                               goto error;
+               }
+
+               nd->state = ncsi_dev_state_probe_channel;
+               break;
+       case ncsi_dev_state_probe_channel:
+               if (!ndp->active_package)
+                       ndp->active_package = list_first_or_null_rcu(
+                               &ndp->packages, struct ncsi_package, node);
+               else if (list_is_last(&ndp->active_package->node,
+                                     &ndp->packages))
+                       ndp->active_package = NULL;
+               else
+                       ndp->active_package = list_next_entry(
+                               ndp->active_package, node);
+
+               /* All available packages and channels have been
+                * enumerated. The enumeration runs only once, when the
+                * NCSI interface is started, so we continue bringing up
+                * the interface afterwards.
+                *
+                * We have to choose an active channel before configuring it.
+                * Note that we might not have an active channel in extreme
+                * situations.
+                */
+               if (!ndp->active_package) {
+                       ndp->flags |= NCSI_DEV_PROBED;
+                       if (ncsi_check_hwa(ndp))
+                               ncsi_enable_hwa(ndp);
+                       else
+                               ncsi_choose_active_channel(ndp);
+                       return;
+               }
+
+               /* Select the active package */
+               ndp->pending_req_num = 1;
+               nca.type = NCSI_PKT_CMD_SP;
+               nca.bytes[0] = 1;
+               nca.package = ndp->active_package->id;
+               nca.channel = 0x1f;
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret)
+                       goto error;
+
+               nd->state = ncsi_dev_state_probe_cis;
+               break;
+       case ncsi_dev_state_probe_cis:
+               ndp->pending_req_num = 32;
+
+               /* Clear initial state */
+               nca.type = NCSI_PKT_CMD_CIS;
+               nca.package = ndp->active_package->id;
+               for (index = 0; index < 0x20; index++) {
+                       nca.channel = index;
+                       ret = ncsi_xmit_cmd(&nca);
+                       if (ret)
+                               goto error;
+               }
+
+               nd->state = ncsi_dev_state_probe_gvi;
+               break;
+       case ncsi_dev_state_probe_gvi:
+       case ncsi_dev_state_probe_gc:
+       case ncsi_dev_state_probe_gls:
+               np = ndp->active_package;
+               ndp->pending_req_num = np->channel_num;
+
+               /* Retrieve version, capability or link status */
+               if (nd->state == ncsi_dev_state_probe_gvi)
+                       nca.type = NCSI_PKT_CMD_GVI;
+               else if (nd->state == ncsi_dev_state_probe_gc)
+                       nca.type = NCSI_PKT_CMD_GC;
+               else
+                       nca.type = NCSI_PKT_CMD_GLS;
+
+               nca.package = np->id;
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       nca.channel = nc->id;
+                       ret = ncsi_xmit_cmd(&nca);
+                       if (ret)
+                               goto error;
+               }
+
+               if (nd->state == ncsi_dev_state_probe_gvi)
+                       nd->state = ncsi_dev_state_probe_gc;
+               else if (nd->state == ncsi_dev_state_probe_gc)
+                       nd->state = ncsi_dev_state_probe_gls;
+               else
+                       nd->state = ncsi_dev_state_probe_dp;
+               break;
+       case ncsi_dev_state_probe_dp:
+               ndp->pending_req_num = 1;
+
+               /* Deselect the active package */
+               nca.type = NCSI_PKT_CMD_DP;
+               nca.package = ndp->active_package->id;
+               nca.channel = 0x1f;
+               ret = ncsi_xmit_cmd(&nca);
+               if (ret)
+                       goto error;
+
+               /* Scan channels in next package */
+               nd->state = ncsi_dev_state_probe_channel;
+               break;
+       default:
+               netdev_warn(nd->dev, "Wrong NCSI state 0x%x in enumeration\n",
+                           nd->state);
+       }
+
+       return;
+error:
+       ncsi_report_link(ndp, true);
+}
+
+static void ncsi_dev_work(struct work_struct *work)
+{
+       struct ncsi_dev_priv *ndp = container_of(work,
+                       struct ncsi_dev_priv, work);
+       struct ncsi_dev *nd = &ndp->ndev;
+
+       switch (nd->state & ncsi_dev_state_major) {
+       case ncsi_dev_state_probe:
+               ncsi_probe_channel(ndp);
+               break;
+       case ncsi_dev_state_suspend:
+               ncsi_suspend_channel(ndp);
+               break;
+       case ncsi_dev_state_config:
+               ncsi_configure_channel(ndp);
+               break;
+       default:
+               netdev_warn(nd->dev, "Wrong NCSI state 0x%x in workqueue\n",
+                           nd->state);
+       }
+}
+
+int ncsi_process_next_channel(struct ncsi_dev_priv *ndp)
+{
+       struct ncsi_channel *nc;
+       int old_state;
+       unsigned long flags;
+
+       spin_lock_irqsave(&ndp->lock, flags);
+       nc = list_first_or_null_rcu(&ndp->channel_queue,
+                                   struct ncsi_channel, link);
+       if (nc) {
+               old_state = xchg(&nc->state, NCSI_CHANNEL_INVISIBLE);
+               list_del_init(&nc->link);
+       }
+       spin_unlock_irqrestore(&ndp->lock, flags);
+
+       ndp->active_channel = nc;
+       ndp->active_package = nc ? nc->package : NULL;
+       if (!nc) {
+               if (ndp->flags & NCSI_DEV_RESHUFFLE) {
+                       ndp->flags &= ~NCSI_DEV_RESHUFFLE;
+                       return ncsi_choose_active_channel(ndp);
+               }
+
+               ncsi_report_link(ndp, false);
+               return -ENODEV;
+       }
+
+       switch (old_state) {
+       case NCSI_CHANNEL_INACTIVE:
+               ndp->ndev.state = ncsi_dev_state_config;
+               ncsi_configure_channel(ndp);
+               break;
+       case NCSI_CHANNEL_ACTIVE:
+               ndp->ndev.state = ncsi_dev_state_suspend;
+               ncsi_suspend_channel(ndp);
+               break;
+       default:
+               netdev_err(ndp->ndev.dev, "Invalid state 0x%x on %d:%d\n",
+                          nc->state, nc->package->id, nc->id);
+               ncsi_report_link(ndp, false);
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static int ncsi_inet6addr_event(struct notifier_block *this,
+                               unsigned long event, void *data)
+{
+       struct inet6_ifaddr *ifa = data;
+       struct net_device *dev = ifa->idev->dev;
+       struct ncsi_dev *nd = ncsi_find_dev(dev);
+       struct ncsi_dev_priv *ndp = nd ? TO_NCSI_DEV_PRIV(nd) : NULL;
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       struct ncsi_cmd_arg nca;
+       bool action;
+       int ret;
+
+       if (!ndp || (ipv6_addr_type(&ifa->addr) &
+           (IPV6_ADDR_LINKLOCAL | IPV6_ADDR_LOOPBACK)))
+               return NOTIFY_OK;
+
+       switch (event) {
+       case NETDEV_UP:
+               action = (++ndp->inet6_addr_num) == 1;
+               nca.type = NCSI_PKT_CMD_EGMF;
+               break;
+       case NETDEV_DOWN:
+               action = (--ndp->inet6_addr_num == 0);
+               nca.type = NCSI_PKT_CMD_DGMF;
+               break;
+       default:
+               return NOTIFY_OK;
+       }
+
+       /* We might not have an active channel or package yet. The
+        * multicast filter required for IPv6 will be enabled when
+        * an active channel or package is chosen.
+        */
+       np = ndp->active_package;
+       nc = ndp->active_channel;
+       if (!action || !np || !nc)
+               return NOTIFY_OK;
+
+       /* We needn't enable or disable it if the function isn't supported */
+       if (!(nc->caps[NCSI_CAP_GENERIC].cap & NCSI_CAP_GENERIC_MC))
+               return NOTIFY_OK;
+
+       nca.ndp = ndp;
+       nca.driven = false;
+       nca.package = np->id;
+       nca.channel = nc->id;
+       nca.dwords[0] = nc->caps[NCSI_CAP_MC].cap;
+       ret = ncsi_xmit_cmd(&nca);
+       if (ret) {
+               netdev_warn(dev, "Failed to %s global multicast filter (%d)\n",
+                           (event == NETDEV_UP) ? "enable" : "disable", ret);
+               return NOTIFY_DONE;
+       }
+
+       return NOTIFY_OK;
+}
+
+static struct notifier_block ncsi_inet6addr_notifier = {
+       .notifier_call = ncsi_inet6addr_event,
+};
+#endif /* CONFIG_IPV6 */
+
+struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
+                                  void (*handler)(struct ncsi_dev *ndev))
+{
+       struct ncsi_dev_priv *ndp;
+       struct ncsi_dev *nd;
+       unsigned long flags;
+       int i;
+
+       /* Check if the device has been registered or not */
+       nd = ncsi_find_dev(dev);
+       if (nd)
+               return nd;
+
+       /* Create NCSI device */
+       ndp = kzalloc(sizeof(*ndp), GFP_ATOMIC);
+       if (!ndp)
+               return NULL;
+
+       nd = &ndp->ndev;
+       nd->state = ncsi_dev_state_registered;
+       nd->dev = dev;
+       nd->handler = handler;
+       ndp->pending_req_num = 0;
+       INIT_LIST_HEAD(&ndp->channel_queue);
+       INIT_WORK(&ndp->work, ncsi_dev_work);
+
+       /* Initialize private NCSI device */
+       spin_lock_init(&ndp->lock);
+       INIT_LIST_HEAD(&ndp->packages);
+       ndp->request_id = 0;
+       for (i = 0; i < ARRAY_SIZE(ndp->requests); i++) {
+               ndp->requests[i].id = i;
+               ndp->requests[i].ndp = ndp;
+               setup_timer(&ndp->requests[i].timer,
+                           ncsi_request_timeout,
+                           (unsigned long)&ndp->requests[i]);
+       }
+
+       spin_lock_irqsave(&ncsi_dev_lock, flags);
+#if IS_ENABLED(CONFIG_IPV6)
+       ndp->inet6_addr_num = 0;
+       if (list_empty(&ncsi_dev_list))
+               register_inet6addr_notifier(&ncsi_inet6addr_notifier);
+#endif
+       list_add_tail_rcu(&ndp->node, &ncsi_dev_list);
+       spin_unlock_irqrestore(&ncsi_dev_lock, flags);
+
+       /* Register NCSI packet Rx handler */
+       ndp->ptype.type = cpu_to_be16(ETH_P_NCSI);
+       ndp->ptype.func = ncsi_rcv_rsp;
+       ndp->ptype.dev = dev;
+       dev_add_pack(&ndp->ptype);
+
+       return nd;
+}
+EXPORT_SYMBOL_GPL(ncsi_register_dev);
+
+int ncsi_start_dev(struct ncsi_dev *nd)
+{
+       struct ncsi_dev_priv *ndp = TO_NCSI_DEV_PRIV(nd);
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       int old_state, ret;
+
+       if (nd->state != ncsi_dev_state_registered &&
+           nd->state != ncsi_dev_state_functional)
+               return -ENOTTY;
+
+       if (!(ndp->flags & NCSI_DEV_PROBED)) {
+               nd->state = ncsi_dev_state_probe;
+               schedule_work(&ndp->work);
+               return 0;
+       }
+
+       /* Reset channel's state and start over */
+       NCSI_FOR_EACH_PACKAGE(ndp, np) {
+               NCSI_FOR_EACH_CHANNEL(np, nc) {
+                       old_state = xchg(&nc->state, NCSI_CHANNEL_INACTIVE);
+                       WARN_ON_ONCE(!list_empty(&nc->link) ||
+                                    old_state == NCSI_CHANNEL_INVISIBLE);
+               }
+       }
+
+       if (ndp->flags & NCSI_DEV_HWA)
+               ret = ncsi_enable_hwa(ndp);
+       else
+               ret = ncsi_choose_active_channel(ndp);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(ncsi_start_dev);
+
+void ncsi_unregister_dev(struct ncsi_dev *nd)
+{
+       struct ncsi_dev_priv *ndp = TO_NCSI_DEV_PRIV(nd);
+       struct ncsi_package *np, *tmp;
+       unsigned long flags;
+
+       dev_remove_pack(&ndp->ptype);
+
+       list_for_each_entry_safe(np, tmp, &ndp->packages, node)
+               ncsi_remove_package(np);
+
+       spin_lock_irqsave(&ncsi_dev_lock, flags);
+       list_del_rcu(&ndp->node);
+#if IS_ENABLED(CONFIG_IPV6)
+       if (list_empty(&ncsi_dev_list))
+               unregister_inet6addr_notifier(&ncsi_inet6addr_notifier);
+#endif
+       spin_unlock_irqrestore(&ncsi_dev_lock, flags);
+
+       kfree(ndp);
+}
+EXPORT_SYMBOL_GPL(ncsi_unregister_dev);
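The request allocator whose wrap-around second pass opens this hunk amounts to a two-pass circular scan over a fixed-size slot table, with a cursor remembering where the next search should start. A minimal standalone sketch of that pattern (hypothetical names, locking omitted) looks like this:

```c
#include <stdbool.h>
#include <stddef.h>

#define NSLOTS 256

struct slot {
        bool used;
};

struct table {
        struct slot slots[NSLOTS];
        size_t cursor;          /* next index to try */
};

/* Circular scan: try cursor..end, then wrap to 0..cursor-1.
 * Returns a claimed free slot, or NULL if the table is full.
 */
static struct slot *alloc_slot(struct table *t)
{
        for (size_t n = 0; n < NSLOTS; n++) {
                size_t i = (t->cursor + n) % NSLOTS;

                if (t->slots[i].used)
                        continue;

                t->slots[i].used = true;
                t->cursor = (i + 1) % NSLOTS;
                return &t->slots[i];
        }

        return NULL;
}
```

In the real driver the same scan runs under `ndp->lock` and also records the `driven` flag on the claimed request; the cursor (`ndp->request_id`) keeps successive allocations from always reusing the lowest-numbered slots.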
diff --git a/net/ncsi/ncsi-pkt.h b/net/ncsi/ncsi-pkt.h
new file mode 100644 (file)
index 0000000..3ea49ed
--- /dev/null
@@ -0,0 +1,415 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __NCSI_PKT_H__
+#define __NCSI_PKT_H__
+
+struct ncsi_pkt_hdr {
+       unsigned char mc_id;        /* Management controller ID */
+       unsigned char revision;     /* NCSI version - 0x01      */
+       unsigned char reserved;     /* Reserved                 */
+       unsigned char id;           /* Packet sequence number   */
+       unsigned char type;         /* Packet type              */
+       unsigned char channel;      /* Network controller ID    */
+       __be16        length;       /* Payload length           */
+       __be32        reserved1[2]; /* Reserved                 */
+};
+
+struct ncsi_cmd_pkt_hdr {
+       struct ncsi_pkt_hdr common; /* Common NCSI packet header */
+};
+
+struct ncsi_rsp_pkt_hdr {
+       struct ncsi_pkt_hdr common; /* Common NCSI packet header */
+       __be16              code;   /* Response code             */
+       __be16              reason; /* Response reason           */
+};
+
+struct ncsi_aen_pkt_hdr {
+       struct ncsi_pkt_hdr common;       /* Common NCSI packet header */
+       unsigned char       reserved2[3]; /* Reserved                  */
+       unsigned char       type;         /* AEN packet type           */
+};
+
+/* NCSI common command packet */
+struct ncsi_cmd_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header */
+       __be32                  checksum; /* Checksum       */
+       unsigned char           pad[26];
+};
+
+struct ncsi_rsp_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;      /* Response header */
+       __be32                  checksum; /* Checksum        */
+       unsigned char           pad[22];
+};
+
+/* Select Package */
+struct ncsi_cmd_sp_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;            /* Command header */
+       unsigned char           reserved[3];    /* Reserved       */
+       unsigned char           hw_arbitration; /* HW arbitration */
+       __be32                  checksum;       /* Checksum       */
+       unsigned char           pad[22];
+};
+
+/* Disable Channel */
+struct ncsi_cmd_dc_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;         /* Command header  */
+       unsigned char           reserved[3]; /* Reserved        */
+       unsigned char           ald;         /* Allow link down */
+       __be32                  checksum;    /* Checksum        */
+       unsigned char           pad[22];
+};
+
+/* Reset Channel */
+struct ncsi_cmd_rc_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header */
+       __be32                  reserved; /* Reserved       */
+       __be32                  checksum; /* Checksum       */
+       unsigned char           pad[22];
+};
+
+/* AEN Enable */
+struct ncsi_cmd_ae_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;         /* Command header   */
+       unsigned char           reserved[3]; /* Reserved         */
+       unsigned char           mc_id;       /* MC ID            */
+       __be32                  mode;        /* AEN working mode */
+       __be32                  checksum;    /* Checksum         */
+       unsigned char           pad[18];
+};
+
+/* Set Link */
+struct ncsi_cmd_sl_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header    */
+       __be32                  mode;     /* Link working mode */
+       __be32                  oem_mode; /* OEM link mode     */
+       __be32                  checksum; /* Checksum          */
+       unsigned char           pad[18];
+};
+
+/* Set VLAN Filter */
+struct ncsi_cmd_svf_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;       /* Command header    */
+       __be16                  reserved;  /* Reserved          */
+       __be16                  vlan;      /* VLAN ID           */
+       __be16                  reserved1; /* Reserved          */
+       unsigned char           index;     /* VLAN table index  */
+       unsigned char           enable;    /* Enable or disable */
+       __be32                  checksum;  /* Checksum          */
+       unsigned char           pad[14];
+};
+
+/* Enable VLAN */
+struct ncsi_cmd_ev_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;         /* Command header   */
+       unsigned char           reserved[3]; /* Reserved         */
+       unsigned char           mode;        /* VLAN filter mode */
+       __be32                  checksum;    /* Checksum         */
+       unsigned char           pad[22];
+};
+
+/* Set MAC Address */
+struct ncsi_cmd_sma_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header          */
+       unsigned char           mac[6];   /* MAC address             */
+       unsigned char           index;    /* MAC table index         */
+       unsigned char           at_e;     /* Addr type and operation */
+       __be32                  checksum; /* Checksum                */
+       unsigned char           pad[18];
+};
+
+/* Enable Broadcast Filter */
+struct ncsi_cmd_ebf_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header */
+       __be32                  mode;     /* Filter mode    */
+       __be32                  checksum; /* Checksum       */
+       unsigned char           pad[22];
+};
+
+/* Enable Global Multicast Filter */
+struct ncsi_cmd_egmf_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;      /* Command header */
+       __be32                  mode;     /* Global MC mode */
+       __be32                  checksum; /* Checksum       */
+       unsigned char           pad[22];
+};
+
+/* Set NCSI Flow Control */
+struct ncsi_cmd_snfc_pkt {
+       struct ncsi_cmd_pkt_hdr cmd;         /* Command header    */
+       unsigned char           reserved[3]; /* Reserved          */
+       unsigned char           mode;        /* Flow control mode */
+       __be32                  checksum;    /* Checksum          */
+       unsigned char           pad[22];
+};
+
+/* Get Link Status */
+struct ncsi_rsp_gls_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;        /* Response header   */
+       __be32                  status;     /* Link status       */
+       __be32                  other;      /* Other indications */
+       __be32                  oem_status; /* OEM link status   */
+       __be32                  checksum;
+       unsigned char           pad[10];
+};
+
+/* Get Version ID */
+struct ncsi_rsp_gvi_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;          /* Response header */
+       __be32                  ncsi_version; /* NCSI version    */
+       unsigned char           reserved[3];  /* Reserved        */
+       unsigned char           alpha2;       /* NCSI alpha2 ver */
+       unsigned char           fw_name[12];  /* f/w name string */
+       __be32                  fw_version;   /* f/w version     */
+       __be16                  pci_ids[4];   /* PCI IDs         */
+       __be32                  mf_id;        /* Manufacturer ID */
+       __be32                  checksum;
+};
+
+/* Get Capabilities */
+struct ncsi_rsp_gc_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;         /* Response header   */
+       __be32                  cap;         /* Capabilities      */
+       __be32                  bc_cap;      /* Broadcast cap     */
+       __be32                  mc_cap;      /* Multicast cap     */
+       __be32                  buf_cap;     /* Buffering cap     */
+       __be32                  aen_cap;     /* AEN cap           */
+       unsigned char           vlan_cnt;    /* VLAN filter count */
+       unsigned char           mixed_cnt;   /* Mix filter count  */
+       unsigned char           mc_cnt;      /* MC filter count   */
+       unsigned char           uc_cnt;      /* UC filter count   */
+       unsigned char           reserved[2]; /* Reserved          */
+       unsigned char           vlan_mode;   /* VLAN mode         */
+       unsigned char           channel_cnt; /* Channel count     */
+       __be32                  checksum;    /* Checksum          */
+};
+
+/* Get Parameters */
+struct ncsi_rsp_gp_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;          /* Response header       */
+       unsigned char           mac_cnt;      /* Number of MAC addr    */
+       unsigned char           reserved[2];  /* Reserved              */
+       unsigned char           mac_enable;   /* MAC addr enable flags */
+       unsigned char           vlan_cnt;     /* VLAN tag count        */
+       unsigned char           reserved1;    /* Reserved              */
+       __be16                  vlan_enable;  /* VLAN tag enable flags */
+       __be32                  link_mode;    /* Link setting          */
+       __be32                  bc_mode;      /* BC filter mode        */
+       __be32                  valid_modes;  /* Valid mode parameters */
+       unsigned char           vlan_mode;    /* VLAN mode             */
+       unsigned char           fc_mode;      /* Flow control mode     */
+       unsigned char           reserved2[2]; /* Reserved              */
+       __be32                  aen_mode;     /* AEN mode              */
+       unsigned char           mac[6];       /* Supported MAC addr    */
+       __be16                  vlan;         /* Supported VLAN tags   */
+       __be32                  checksum;     /* Checksum              */
+};
+
+/* Get Controller Packet Statistics */
+struct ncsi_rsp_gcps_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;            /* Response header            */
+       __be32                  cnt_hi;         /* Counters cleared, MSBs     */
+       __be32                  cnt_lo;         /* Counters cleared, LSBs     */
+       __be32                  rx_bytes;       /* Rx bytes                   */
+       __be32                  tx_bytes;       /* Tx bytes                   */
+       __be32                  rx_uc_pkts;     /* Rx UC packets              */
+       __be32                  rx_mc_pkts;     /* Rx MC packets              */
+       __be32                  rx_bc_pkts;     /* Rx BC packets              */
+       __be32                  tx_uc_pkts;     /* Tx UC packets              */
+       __be32                  tx_mc_pkts;     /* Tx MC packets              */
+       __be32                  tx_bc_pkts;     /* Tx BC packets              */
+       __be32                  fcs_err;        /* FCS errors                 */
+       __be32                  align_err;      /* Alignment errors           */
+       __be32                  false_carrier;  /* False carrier detection    */
+       __be32                  runt_pkts;      /* Rx runt packets            */
+       __be32                  jabber_pkts;    /* Rx jabber packets          */
+       __be32                  rx_pause_xon;   /* Rx pause XON frames        */
+       __be32                  rx_pause_xoff;  /* Rx XOFF frames             */
+       __be32                  tx_pause_xon;   /* Tx XON frames              */
+       __be32                  tx_pause_xoff;  /* Tx XOFF frames             */
+       __be32                  tx_s_collision; /* Single collision frames    */
+       __be32                  tx_m_collision; /* Multiple collision frames  */
+       __be32                  l_collision;    /* Late collision frames      */
+       __be32                  e_collision;    /* Excessive collision frames */
+       __be32                  rx_ctl_frames;  /* Rx control frames          */
+       __be32                  rx_64_frames;   /* Rx 64-bytes frames         */
+       __be32                  rx_127_frames;  /* Rx 65-127 bytes frames     */
+       __be32                  rx_255_frames;  /* Rx 128-255 bytes frames    */
+       __be32                  rx_511_frames;  /* Rx 256-511 bytes frames    */
+       __be32                  rx_1023_frames; /* Rx 512-1023 bytes frames   */
+       __be32                  rx_1522_frames; /* Rx 1024-1522 bytes frames  */
+       __be32                  rx_9022_frames; /* Rx 1523-9022 bytes frames  */
+       __be32                  tx_64_frames;   /* Tx 64-bytes frames         */
+       __be32                  tx_127_frames;  /* Tx 65-127 bytes frames     */
+       __be32                  tx_255_frames;  /* Tx 128-255 bytes frames    */
+       __be32                  tx_511_frames;  /* Tx 256-511 bytes frames    */
+       __be32                  tx_1023_frames; /* Tx 512-1023 bytes frames   */
+       __be32                  tx_1522_frames; /* Tx 1024-1522 bytes frames  */
+       __be32                  tx_9022_frames; /* Tx 1523-9022 bytes frames  */
+       __be32                  rx_valid_bytes; /* Rx valid bytes             */
+       __be32                  rx_runt_pkts;   /* Rx error runt packets      */
+       __be32                  rx_jabber_pkts; /* Rx error jabber packets    */
+       __be32                  checksum;       /* Checksum                   */
+};
+
+/* Get NCSI Statistics */
+struct ncsi_rsp_gns_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;           /* Response header         */
+       __be32                  rx_cmds;       /* Rx NCSI commands        */
+       __be32                  dropped_cmds;  /* Dropped commands        */
+       __be32                  cmd_type_errs; /* Command type errors     */
+       __be32                  cmd_csum_errs; /* Command checksum errors */
+       __be32                  rx_pkts;       /* Rx NCSI packets         */
+       __be32                  tx_pkts;       /* Tx NCSI packets         */
+       __be32                  tx_aen_pkts;   /* Tx AEN packets          */
+       __be32                  checksum;      /* Checksum                */
+};
+
+/* Get NCSI Pass-through Statistics */
+struct ncsi_rsp_gnpts_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;            /* Response header     */
+       __be32                  tx_pkts;        /* Tx packets          */
+       __be32                  tx_dropped;     /* Tx dropped packets  */
+       __be32                  tx_channel_err; /* Tx channel errors   */
+       __be32                  tx_us_err;      /* Tx undersize errors */
+       __be32                  rx_pkts;        /* Rx packets          */
+       __be32                  rx_dropped;     /* Rx dropped packets  */
+       __be32                  rx_channel_err; /* Rx channel errors   */
+       __be32                  rx_us_err;      /* Rx undersize errors */
+       __be32                  rx_os_err;      /* Rx oversize errors  */
+       __be32                  checksum;       /* Checksum            */
+};
+
+/* Get package status */
+struct ncsi_rsp_gps_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;      /* Response header             */
+       __be32                  status;   /* Hardware arbitration status */
+       __be32                  checksum;
+};
+
+/* Get package UUID */
+struct ncsi_rsp_gpuuid_pkt {
+       struct ncsi_rsp_pkt_hdr rsp;      /* Response header */
+       unsigned char           uuid[16]; /* UUID            */
+       __be32                  checksum;
+};
+
+/* AEN: Link State Change */
+struct ncsi_aen_lsc_pkt {
+       struct ncsi_aen_pkt_hdr aen;        /* AEN header      */
+       __be32                  status;     /* Link status     */
+       __be32                  oem_status; /* OEM link status */
+       __be32                  checksum;   /* Checksum        */
+       unsigned char           pad[14];
+};
+
+/* AEN: Configuration Required */
+struct ncsi_aen_cr_pkt {
+       struct ncsi_aen_pkt_hdr aen;      /* AEN header */
+       __be32                  checksum; /* Checksum   */
+       unsigned char           pad[22];
+};
+
+/* AEN: Host Network Controller Driver Status Change */
+struct ncsi_aen_hncdsc_pkt {
+       struct ncsi_aen_pkt_hdr aen;      /* AEN header */
+       __be32                  status;   /* Status     */
+       __be32                  checksum; /* Checksum   */
+       unsigned char           pad[18];
+};
+
+/* NCSI packet revision */
+#define NCSI_PKT_REVISION      0x01
+
+/* NCSI packet commands */
+#define NCSI_PKT_CMD_CIS       0x00 /* Clear Initial State              */
+#define NCSI_PKT_CMD_SP                0x01 /* Select Package                   */
+#define NCSI_PKT_CMD_DP                0x02 /* Deselect Package                 */
+#define NCSI_PKT_CMD_EC                0x03 /* Enable Channel                   */
+#define NCSI_PKT_CMD_DC                0x04 /* Disable Channel                  */
+#define NCSI_PKT_CMD_RC                0x05 /* Reset Channel                    */
+#define NCSI_PKT_CMD_ECNT      0x06 /* Enable Channel Network Tx        */
+#define NCSI_PKT_CMD_DCNT      0x07 /* Disable Channel Network Tx       */
+#define NCSI_PKT_CMD_AE                0x08 /* AEN Enable                       */
+#define NCSI_PKT_CMD_SL                0x09 /* Set Link                         */
+#define NCSI_PKT_CMD_GLS       0x0a /* Get Link Status                  */
+#define NCSI_PKT_CMD_SVF       0x0b /* Set VLAN Filter                  */
+#define NCSI_PKT_CMD_EV                0x0c /* Enable VLAN                      */
+#define NCSI_PKT_CMD_DV                0x0d /* Disable VLAN                     */
+#define NCSI_PKT_CMD_SMA       0x0e /* Set MAC address                  */
+#define NCSI_PKT_CMD_EBF       0x10 /* Enable Broadcast Filter          */
+#define NCSI_PKT_CMD_DBF       0x11 /* Disable Broadcast Filter         */
+#define NCSI_PKT_CMD_EGMF      0x12 /* Enable Global Multicast Filter   */
+#define NCSI_PKT_CMD_DGMF      0x13 /* Disable Global Multicast Filter  */
+#define NCSI_PKT_CMD_SNFC      0x14 /* Set NCSI Flow Control            */
+#define NCSI_PKT_CMD_GVI       0x15 /* Get Version ID                   */
+#define NCSI_PKT_CMD_GC                0x16 /* Get Capabilities                 */
+#define NCSI_PKT_CMD_GP                0x17 /* Get Parameters                   */
+#define NCSI_PKT_CMD_GCPS      0x18 /* Get Controller Packet Statistics */
+#define NCSI_PKT_CMD_GNS       0x19 /* Get NCSI Statistics              */
+#define NCSI_PKT_CMD_GNPTS     0x1a /* Get NCSI Pass-through Statistics */
+#define NCSI_PKT_CMD_GPS       0x1b /* Get package status               */
+#define NCSI_PKT_CMD_OEM       0x50 /* OEM                              */
+#define NCSI_PKT_CMD_PLDM      0x51 /* PLDM request over NCSI over RBT  */
+#define NCSI_PKT_CMD_GPUUID    0x52 /* Get package UUID                 */
+
+/* NCSI packet responses */
+#define NCSI_PKT_RSP_CIS       (NCSI_PKT_CMD_CIS    + 0x80)
+#define NCSI_PKT_RSP_SP                (NCSI_PKT_CMD_SP     + 0x80)
+#define NCSI_PKT_RSP_DP                (NCSI_PKT_CMD_DP     + 0x80)
+#define NCSI_PKT_RSP_EC                (NCSI_PKT_CMD_EC     + 0x80)
+#define NCSI_PKT_RSP_DC                (NCSI_PKT_CMD_DC     + 0x80)
+#define NCSI_PKT_RSP_RC                (NCSI_PKT_CMD_RC     + 0x80)
+#define NCSI_PKT_RSP_ECNT      (NCSI_PKT_CMD_ECNT   + 0x80)
+#define NCSI_PKT_RSP_DCNT      (NCSI_PKT_CMD_DCNT   + 0x80)
+#define NCSI_PKT_RSP_AE                (NCSI_PKT_CMD_AE     + 0x80)
+#define NCSI_PKT_RSP_SL                (NCSI_PKT_CMD_SL     + 0x80)
+#define NCSI_PKT_RSP_GLS       (NCSI_PKT_CMD_GLS    + 0x80)
+#define NCSI_PKT_RSP_SVF       (NCSI_PKT_CMD_SVF    + 0x80)
+#define NCSI_PKT_RSP_EV                (NCSI_PKT_CMD_EV     + 0x80)
+#define NCSI_PKT_RSP_DV                (NCSI_PKT_CMD_DV     + 0x80)
+#define NCSI_PKT_RSP_SMA       (NCSI_PKT_CMD_SMA    + 0x80)
+#define NCSI_PKT_RSP_EBF       (NCSI_PKT_CMD_EBF    + 0x80)
+#define NCSI_PKT_RSP_DBF       (NCSI_PKT_CMD_DBF    + 0x80)
+#define NCSI_PKT_RSP_EGMF      (NCSI_PKT_CMD_EGMF   + 0x80)
+#define NCSI_PKT_RSP_DGMF      (NCSI_PKT_CMD_DGMF   + 0x80)
+#define NCSI_PKT_RSP_SNFC      (NCSI_PKT_CMD_SNFC   + 0x80)
+#define NCSI_PKT_RSP_GVI       (NCSI_PKT_CMD_GVI    + 0x80)
+#define NCSI_PKT_RSP_GC                (NCSI_PKT_CMD_GC     + 0x80)
+#define NCSI_PKT_RSP_GP                (NCSI_PKT_CMD_GP     + 0x80)
+#define NCSI_PKT_RSP_GCPS      (NCSI_PKT_CMD_GCPS   + 0x80)
+#define NCSI_PKT_RSP_GNS       (NCSI_PKT_CMD_GNS    + 0x80)
+#define NCSI_PKT_RSP_GNPTS     (NCSI_PKT_CMD_GNPTS  + 0x80)
+#define NCSI_PKT_RSP_GPS       (NCSI_PKT_CMD_GPS    + 0x80)
+#define NCSI_PKT_RSP_OEM       (NCSI_PKT_CMD_OEM    + 0x80)
+#define NCSI_PKT_RSP_PLDM      (NCSI_PKT_CMD_PLDM   + 0x80)
+#define NCSI_PKT_RSP_GPUUID    (NCSI_PKT_CMD_GPUUID + 0x80)
+
+/* NCSI response code/reason */
+#define NCSI_PKT_RSP_C_COMPLETED       0x0000 /* Command Completed        */
+#define NCSI_PKT_RSP_C_FAILED          0x0001 /* Command Failed           */
+#define NCSI_PKT_RSP_C_UNAVAILABLE     0x0002 /* Command Unavailable      */
+#define NCSI_PKT_RSP_C_UNSUPPORTED     0x0003 /* Command Unsupported      */
+#define NCSI_PKT_RSP_R_NO_ERROR                0x0000 /* No Error                 */
+#define NCSI_PKT_RSP_R_INTERFACE       0x0001 /* Interface not ready      */
+#define NCSI_PKT_RSP_R_PARAM           0x0002 /* Invalid Parameter        */
+#define NCSI_PKT_RSP_R_CHANNEL         0x0003 /* Channel not Ready        */
+#define NCSI_PKT_RSP_R_PACKAGE         0x0004 /* Package not Ready        */
+#define NCSI_PKT_RSP_R_LENGTH          0x0005 /* Invalid payload length   */
+#define NCSI_PKT_RSP_R_UNKNOWN         0x7fff /* Command type unsupported */
+
+/* NCSI AEN packet type */
+#define NCSI_PKT_AEN           0xFF /* AEN Packet               */
+#define NCSI_PKT_AEN_LSC       0x00 /* Link status change       */
+#define NCSI_PKT_AEN_CR                0x01 /* Configuration required   */
+#define NCSI_PKT_AEN_HNCDSC    0x02 /* HNC driver status change */
+
+#endif /* __NCSI_PKT_H__ */
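[Editor's note: every packet structure above ends in a `__be32 checksum` field. As a standalone reference, here is a sketch of the NC-SI checksum convention, the 32-bit two's complement of the sum of the payload taken as big-endian 16-bit words. The odd-length handling is an assumption for completeness (NC-SI payloads are even-sized); the authoritative implementation in this series is `ncsi_calculate_checksum()`.]

```c
#include <stddef.h>
#include <stdint.h>

/* Sum the buffer as big-endian 16-bit words (an odd trailing byte is
 * treated as the high byte of a final word), then return the 32-bit
 * two's complement of that sum. A receiver can verify a packet by
 * recomputing the sum and checking that adding the transmitted
 * checksum yields zero modulo 2^32.
 */
static uint32_t ncsi_checksum(const unsigned char *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)data[i] << 8) | data[i + 1];
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;

	return (~sum + 1) & 0xffffffffu;
}
```

Note that `ncsi_validate_rsp_pkt()` below treats an all-zero checksum as "sender does not implement checksums" and skips verification, which the specification allows.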
diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
new file mode 100644 (file)
index 0000000..af84389
--- /dev/null
@@ -0,0 +1,1035 @@
+/*
+ * Copyright Gavin Shan, IBM Corporation 2016.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+
+#include <net/ncsi.h>
+#include <net/net_namespace.h>
+#include <net/sock.h>
+
+#include "internal.h"
+#include "ncsi-pkt.h"
+
+static int ncsi_validate_rsp_pkt(struct ncsi_request *nr,
+                                unsigned short payload)
+{
+       struct ncsi_rsp_pkt_hdr *h;
+       u32 checksum;
+       __be32 *pchecksum;
+
+       /* Check the NCSI packet header. There's no need to
+        * validate the packet type, which should have been
+        * checked before this function is called.
+        */
+       h = (struct ncsi_rsp_pkt_hdr *)skb_network_header(nr->rsp);
+       if (h->common.revision != NCSI_PKT_REVISION)
+               return -EINVAL;
+       if (ntohs(h->common.length) != payload)
+               return -EINVAL;
+
+       /* Check on code and reason */
+       if (ntohs(h->code) != NCSI_PKT_RSP_C_COMPLETED ||
+           ntohs(h->reason) != NCSI_PKT_RSP_R_NO_ERROR)
+               return -EINVAL;
+
+       /* Validate the checksum, which may be all zeroes if
+        * the sender doesn't support checksums, as the NCSI
+        * specification permits.
+        */
+       pchecksum = (__be32 *)((void *)(h + 1) + payload - 4);
+       if (ntohl(*pchecksum) == 0)
+               return 0;
+
+       checksum = ncsi_calculate_checksum((unsigned char *)h,
+                                          sizeof(*h) + payload - 4);
+       if (*pchecksum != htonl(checksum))
+               return -EINVAL;
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_cis(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       unsigned char id;
+
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel, &np, &nc);
+       if (!nc) {
+               if (ndp->flags & NCSI_DEV_PROBED)
+                       return -ENXIO;
+
+               id = NCSI_CHANNEL_INDEX(rsp->rsp.common.channel);
+               nc = ncsi_add_channel(np, id);
+       }
+
+       return nc ? 0 : -ENODEV;
+}
+
+static int ncsi_rsp_handler_sp(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_package *np;
+       unsigned char id;
+
+       /* Add the package if it doesn't exist yet. Otherwise,
+        * just update the state of its child channels.
+        */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     &np, NULL);
+       if (!np) {
+               if (ndp->flags & NCSI_DEV_PROBED)
+                       return -ENXIO;
+
+               id = NCSI_PACKAGE_INDEX(rsp->rsp.common.channel);
+               np = ncsi_add_package(ndp, id);
+               if (!np)
+                       return -ENODEV;
+       }
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_dp(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_package *np;
+       struct ncsi_channel *nc;
+       unsigned long flags;
+
+       /* Find the package */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     &np, NULL);
+       if (!np)
+               return -ENODEV;
+
+       /* Change state of all channels attached to the package */
+       NCSI_FOR_EACH_CHANNEL(np, nc) {
+               spin_lock_irqsave(&nc->lock, flags);
+               nc->state = NCSI_CHANNEL_INACTIVE;
+               spin_unlock_irqrestore(&nc->lock, flags);
+       }
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_ec(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       ncm = &nc->modes[NCSI_MODE_ENABLE];
+       if (ncm->enable)
+               return -EBUSY;
+
+       ncm->enable = 1;
+       return 0;
+}
+
+static int ncsi_rsp_handler_dc(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+       int ret;
+
+       ret = ncsi_validate_rsp_pkt(nr, 4);
+       if (ret)
+               return ret;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       ncm = &nc->modes[NCSI_MODE_ENABLE];
+       if (!ncm->enable)
+               return -EBUSY;
+
+       ncm->enable = 0;
+       return 0;
+}
+
+static int ncsi_rsp_handler_rc(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       unsigned long flags;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update state for the specified channel */
+       spin_lock_irqsave(&nc->lock, flags);
+       nc->state = NCSI_CHANNEL_INACTIVE;
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_ecnt(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       ncm = &nc->modes[NCSI_MODE_TX_ENABLE];
+       if (ncm->enable)
+               return -EBUSY;
+
+       ncm->enable = 1;
+       return 0;
+}
+
+static int ncsi_rsp_handler_dcnt(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       ncm = &nc->modes[NCSI_MODE_TX_ENABLE];
+       if (!ncm->enable)
+               return -EBUSY;
+
+       ncm->enable = 0;
+       return 0;
+}
+
+static int ncsi_rsp_handler_ae(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_ae_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if the AEN has been enabled */
+       ncm = &nc->modes[NCSI_MODE_AEN];
+       if (ncm->enable)
+               return -EBUSY;
+
+       /* Update to AEN configuration */
+       cmd = (struct ncsi_cmd_ae_pkt *)skb_network_header(nr->cmd);
+       ncm->enable = 1;
+       ncm->data[0] = cmd->mc_id;
+       ncm->data[1] = ntohl(cmd->mode);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_sl(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_sl_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       cmd = (struct ncsi_cmd_sl_pkt *)skb_network_header(nr->cmd);
+       ncm = &nc->modes[NCSI_MODE_LINK];
+       ncm->data[0] = ntohl(cmd->mode);
+       ncm->data[1] = ntohl(cmd->oem_mode);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gls(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gls_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+       unsigned long flags;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_gls_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       ncm = &nc->modes[NCSI_MODE_LINK];
+       ncm->data[2] = ntohl(rsp->status);
+       ncm->data[3] = ntohl(rsp->other);
+       ncm->data[4] = ntohl(rsp->oem_status);
+
+       if (nr->driven)
+               return 0;
+
+       /* Reset the channel monitor if it has been enabled */
+       spin_lock_irqsave(&nc->lock, flags);
+       nc->timeout = 0;
+       spin_unlock_irqrestore(&nc->lock, flags);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_svf(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_svf_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_filter *ncf;
+       unsigned short vlan;
+       int ret;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       cmd = (struct ncsi_cmd_svf_pkt *)skb_network_header(nr->cmd);
+       ncf = nc->filters[NCSI_FILTER_VLAN];
+       if (!ncf)
+               return -ENOENT;
+       if (cmd->index >= ncf->total)
+               return -ERANGE;
+
+       /* Add or remove the VLAN filter */
+       if (!(cmd->enable & 0x1)) {
+               ret = ncsi_remove_filter(nc, NCSI_FILTER_VLAN, cmd->index);
+       } else {
+               vlan = ntohs(cmd->vlan);
+               ret = ncsi_add_filter(nc, NCSI_FILTER_VLAN, &vlan);
+       }
+
+       return ret;
+}
+
+static int ncsi_rsp_handler_ev(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_ev_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if VLAN mode has been enabled */
+       ncm = &nc->modes[NCSI_MODE_VLAN];
+       if (ncm->enable)
+               return -EBUSY;
+
+       /* Update to VLAN mode */
+       cmd = (struct ncsi_cmd_ev_pkt *)skb_network_header(nr->cmd);
+       ncm->enable = 1;
+       ncm->data[0] = ntohl(cmd->mode);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_dv(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if VLAN mode has been enabled */
+       ncm = &nc->modes[NCSI_MODE_VLAN];
+       if (!ncm->enable)
+               return -EBUSY;
+
+       /* Update to VLAN mode */
+       ncm->enable = 0;
+       return 0;
+}
+
+static int ncsi_rsp_handler_sma(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_sma_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_filter *ncf;
+       void *bitmap;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* According to NCSI spec 1.01, the mixed filter table
+        * isn't supported yet.
+        */
+       cmd = (struct ncsi_cmd_sma_pkt *)skb_network_header(nr->cmd);
+       switch (cmd->at_e >> 5) {
+       case 0x0:       /* UC address */
+               ncf = nc->filters[NCSI_FILTER_UC];
+               break;
+       case 0x1:       /* MC address */
+               ncf = nc->filters[NCSI_FILTER_MC];
+               break;
+       default:
+               return -EINVAL;
+       }
+
+       /* Sanity check on the filter */
+       if (!ncf)
+               return -ENOENT;
+       else if (cmd->index >= ncf->total)
+               return -ERANGE;
+
+       bitmap = &ncf->bitmap;
+       if (cmd->at_e & 0x1) {
+               if (test_and_set_bit(cmd->index, bitmap))
+                       return -EBUSY;
+               memcpy(ncf->data + 6 * cmd->index, cmd->mac, 6);
+       } else {
+               if (!test_and_clear_bit(cmd->index, bitmap))
+                       return -EBUSY;
+
+               memset(ncf->data + 6 * cmd->index, 0, 6);
+       }
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_ebf(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_ebf_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the package and channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel, NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if broadcast filter has been enabled */
+       ncm = &nc->modes[NCSI_MODE_BC];
+       if (ncm->enable)
+               return -EBUSY;
+
+       /* Update to broadcast filter mode */
+       cmd = (struct ncsi_cmd_ebf_pkt *)skb_network_header(nr->cmd);
+       ncm->enable = 1;
+       ncm->data[0] = ntohl(cmd->mode);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_dbf(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check that the broadcast filter is currently enabled */
+       ncm = &nc->modes[NCSI_MODE_BC];
+       if (!ncm->enable)
+               return -EBUSY;
+
+       /* Update to broadcast filter mode */
+       ncm->enable = 0;
+       ncm->data[0] = 0;
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_egmf(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_egmf_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if multicast filter has been enabled */
+       ncm = &nc->modes[NCSI_MODE_MC];
+       if (ncm->enable)
+               return -EBUSY;
+
+       /* Update to multicast filter mode */
+       cmd = (struct ncsi_cmd_egmf_pkt *)skb_network_header(nr->cmd);
+       ncm->enable = 1;
+       ncm->data[0] = ntohl(cmd->mode);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_dgmf(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if multicast filter is already disabled */
+       ncm = &nc->modes[NCSI_MODE_MC];
+       if (!ncm->enable)
+               return -EBUSY;
+
+       /* Disable multicast filter mode */
+       ncm->enable = 0;
+       ncm->data[0] = 0;
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_snfc(struct ncsi_request *nr)
+{
+       struct ncsi_cmd_snfc_pkt *cmd;
+       struct ncsi_rsp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_mode *ncm;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Check if flow control has been enabled */
+       ncm = &nc->modes[NCSI_MODE_FC];
+       if (ncm->enable)
+               return -EBUSY;
+
+       /* Update to flow control mode */
+       cmd = (struct ncsi_cmd_snfc_pkt *)skb_network_header(nr->cmd);
+       ncm->enable = 1;
+       ncm->data[0] = cmd->mode;
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gvi(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gvi_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_version *ncv;
+       int i;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gvi_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update the channel's version info */
+       ncv = &nc->version;
+       ncv->version = ntohl(rsp->ncsi_version);
+       ncv->alpha2 = rsp->alpha2;
+       memcpy(ncv->fw_name, rsp->fw_name, 12);
+       ncv->fw_version = ntohl(rsp->fw_version);
+       for (i = 0; i < ARRAY_SIZE(ncv->pci_ids); i++)
+               ncv->pci_ids[i] = ntohs(rsp->pci_ids[i]);
+       ncv->mf_id = ntohl(rsp->mf_id);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gc(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gc_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_filter *ncf;
+       size_t size, entry_size;
+       int cnt, i;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gc_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update channel's capabilities */
+       nc->caps[NCSI_CAP_GENERIC].cap = ntohl(rsp->cap) &
+                                        NCSI_CAP_GENERIC_MASK;
+       nc->caps[NCSI_CAP_BC].cap = ntohl(rsp->bc_cap) &
+                                   NCSI_CAP_BC_MASK;
+       nc->caps[NCSI_CAP_MC].cap = ntohl(rsp->mc_cap) &
+                                   NCSI_CAP_MC_MASK;
+       nc->caps[NCSI_CAP_BUFFER].cap = ntohl(rsp->buf_cap);
+       nc->caps[NCSI_CAP_AEN].cap = ntohl(rsp->aen_cap) &
+                                    NCSI_CAP_AEN_MASK;
+       nc->caps[NCSI_CAP_VLAN].cap = rsp->vlan_mode &
+                                     NCSI_CAP_VLAN_MASK;
+
+       /* Build filters */
+       for (i = 0; i < NCSI_FILTER_MAX; i++) {
+               switch (i) {
+               case NCSI_FILTER_VLAN:
+                       cnt = rsp->vlan_cnt;
+                       entry_size = 2;
+                       break;
+               case NCSI_FILTER_MIXED:
+                       cnt = rsp->mixed_cnt;
+                       entry_size = 6;
+                       break;
+               case NCSI_FILTER_MC:
+                       cnt = rsp->mc_cnt;
+                       entry_size = 6;
+                       break;
+               case NCSI_FILTER_UC:
+                       cnt = rsp->uc_cnt;
+                       entry_size = 6;
+                       break;
+               default:
+                       continue;
+               }
+
+               if (!cnt || nc->filters[i])
+                       continue;
+
+               size = sizeof(*ncf) + cnt * entry_size;
+               ncf = kzalloc(size, GFP_ATOMIC);
+               if (!ncf) {
+                       pr_warn("%s: Cannot alloc filter table (%d)\n",
+                               __func__, i);
+                       return -ENOMEM;
+               }
+
+               ncf->index = i;
+               ncf->total = cnt;
+               ncf->bitmap = 0x0ul;
+               nc->filters[i] = ncf;
+       }
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gp(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gp_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       unsigned short enable, vlan;
+       unsigned char *pdata;
+       int table, i;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gp_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Modes with explicit enabled indications */
+       if (ntohl(rsp->valid_modes) & 0x1) {    /* BC filter mode */
+               nc->modes[NCSI_MODE_BC].enable = 1;
+               nc->modes[NCSI_MODE_BC].data[0] = ntohl(rsp->bc_mode);
+       }
+       if (ntohl(rsp->valid_modes) & 0x2)      /* Channel enabled */
+               nc->modes[NCSI_MODE_ENABLE].enable = 1;
+       if (ntohl(rsp->valid_modes) & 0x4)      /* Channel Tx enabled */
+               nc->modes[NCSI_MODE_TX_ENABLE].enable = 1;
+       if (ntohl(rsp->valid_modes) & 0x8)      /* MC filter mode */
+               nc->modes[NCSI_MODE_MC].enable = 1;
+
+       /* Modes without explicit enabled indications */
+       nc->modes[NCSI_MODE_LINK].enable = 1;
+       nc->modes[NCSI_MODE_LINK].data[0] = ntohl(rsp->link_mode);
+       nc->modes[NCSI_MODE_VLAN].enable = 1;
+       nc->modes[NCSI_MODE_VLAN].data[0] = rsp->vlan_mode;
+       nc->modes[NCSI_MODE_FC].enable = 1;
+       nc->modes[NCSI_MODE_FC].data[0] = rsp->fc_mode;
+       nc->modes[NCSI_MODE_AEN].enable = 1;
+       nc->modes[NCSI_MODE_AEN].data[0] = ntohl(rsp->aen_mode);
+
+       /* MAC addresses filter table */
+       pdata = (unsigned char *)rsp + 48;
+       enable = rsp->mac_enable;
+       for (i = 0; i < rsp->mac_cnt; i++, pdata += 6) {
+               if (i >= (nc->filters[NCSI_FILTER_UC]->total +
+                         nc->filters[NCSI_FILTER_MC]->total))
+                       table = NCSI_FILTER_MIXED;
+               else if (i >= nc->filters[NCSI_FILTER_UC]->total)
+                       table = NCSI_FILTER_MC;
+               else
+                       table = NCSI_FILTER_UC;
+
+               if (!(enable & (0x1 << i)))
+                       continue;
+
+               if (ncsi_find_filter(nc, table, pdata) >= 0)
+                       continue;
+
+               ncsi_add_filter(nc, table, pdata);
+       }
+
+       /* VLAN filter table */
+       enable = ntohs(rsp->vlan_enable);
+       for (i = 0; i < rsp->vlan_cnt; i++, pdata += 2) {
+               if (!(enable & (0x1 << i)))
+                       continue;
+
+               vlan = ntohs(*(__be16 *)pdata);
+               if (ncsi_find_filter(nc, NCSI_FILTER_VLAN, &vlan) >= 0)
+                       continue;
+
+               ncsi_add_filter(nc, NCSI_FILTER_VLAN, &vlan);
+       }
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gcps(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gcps_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_stats *ncs;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gcps_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update HNC's statistics */
+       ncs = &nc->stats;
+       ncs->hnc_cnt_hi         = ntohl(rsp->cnt_hi);
+       ncs->hnc_cnt_lo         = ntohl(rsp->cnt_lo);
+       ncs->hnc_rx_bytes       = ntohl(rsp->rx_bytes);
+       ncs->hnc_tx_bytes       = ntohl(rsp->tx_bytes);
+       ncs->hnc_rx_uc_pkts     = ntohl(rsp->rx_uc_pkts);
+       ncs->hnc_rx_mc_pkts     = ntohl(rsp->rx_mc_pkts);
+       ncs->hnc_rx_bc_pkts     = ntohl(rsp->rx_bc_pkts);
+       ncs->hnc_tx_uc_pkts     = ntohl(rsp->tx_uc_pkts);
+       ncs->hnc_tx_mc_pkts     = ntohl(rsp->tx_mc_pkts);
+       ncs->hnc_tx_bc_pkts     = ntohl(rsp->tx_bc_pkts);
+       ncs->hnc_fcs_err        = ntohl(rsp->fcs_err);
+       ncs->hnc_align_err      = ntohl(rsp->align_err);
+       ncs->hnc_false_carrier  = ntohl(rsp->false_carrier);
+       ncs->hnc_runt_pkts      = ntohl(rsp->runt_pkts);
+       ncs->hnc_jabber_pkts    = ntohl(rsp->jabber_pkts);
+       ncs->hnc_rx_pause_xon   = ntohl(rsp->rx_pause_xon);
+       ncs->hnc_rx_pause_xoff  = ntohl(rsp->rx_pause_xoff);
+       ncs->hnc_tx_pause_xon   = ntohl(rsp->tx_pause_xon);
+       ncs->hnc_tx_pause_xoff  = ntohl(rsp->tx_pause_xoff);
+       ncs->hnc_tx_s_collision = ntohl(rsp->tx_s_collision);
+       ncs->hnc_tx_m_collision = ntohl(rsp->tx_m_collision);
+       ncs->hnc_l_collision    = ntohl(rsp->l_collision);
+       ncs->hnc_e_collision    = ntohl(rsp->e_collision);
+       ncs->hnc_rx_ctl_frames  = ntohl(rsp->rx_ctl_frames);
+       ncs->hnc_rx_64_frames   = ntohl(rsp->rx_64_frames);
+       ncs->hnc_rx_127_frames  = ntohl(rsp->rx_127_frames);
+       ncs->hnc_rx_255_frames  = ntohl(rsp->rx_255_frames);
+       ncs->hnc_rx_511_frames  = ntohl(rsp->rx_511_frames);
+       ncs->hnc_rx_1023_frames = ntohl(rsp->rx_1023_frames);
+       ncs->hnc_rx_1522_frames = ntohl(rsp->rx_1522_frames);
+       ncs->hnc_rx_9022_frames = ntohl(rsp->rx_9022_frames);
+       ncs->hnc_tx_64_frames   = ntohl(rsp->tx_64_frames);
+       ncs->hnc_tx_127_frames  = ntohl(rsp->tx_127_frames);
+       ncs->hnc_tx_255_frames  = ntohl(rsp->tx_255_frames);
+       ncs->hnc_tx_511_frames  = ntohl(rsp->tx_511_frames);
+       ncs->hnc_tx_1023_frames = ntohl(rsp->tx_1023_frames);
+       ncs->hnc_tx_1522_frames = ntohl(rsp->tx_1522_frames);
+       ncs->hnc_tx_9022_frames = ntohl(rsp->tx_9022_frames);
+       ncs->hnc_rx_valid_bytes = ntohl(rsp->rx_valid_bytes);
+       ncs->hnc_rx_runt_pkts   = ntohl(rsp->rx_runt_pkts);
+       ncs->hnc_rx_jabber_pkts = ntohl(rsp->rx_jabber_pkts);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gns(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gns_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_stats *ncs;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gns_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update NCSI's statistics */
+       ncs = &nc->stats;
+       ncs->ncsi_rx_cmds       = ntohl(rsp->rx_cmds);
+       ncs->ncsi_dropped_cmds  = ntohl(rsp->dropped_cmds);
+       ncs->ncsi_cmd_type_errs = ntohl(rsp->cmd_type_errs);
+       ncs->ncsi_cmd_csum_errs = ntohl(rsp->cmd_csum_errs);
+       ncs->ncsi_rx_pkts       = ntohl(rsp->rx_pkts);
+       ncs->ncsi_tx_pkts       = ntohl(rsp->tx_pkts);
+       ncs->ncsi_tx_aen_pkts   = ntohl(rsp->tx_aen_pkts);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gnpts(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gnpts_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_channel *nc;
+       struct ncsi_channel_stats *ncs;
+
+       /* Find the channel */
+       rsp = (struct ncsi_rsp_gnpts_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     NULL, &nc);
+       if (!nc)
+               return -ENODEV;
+
+       /* Update HNC's pass-through statistics */
+       ncs = &nc->stats;
+       ncs->pt_tx_pkts        = ntohl(rsp->tx_pkts);
+       ncs->pt_tx_dropped     = ntohl(rsp->tx_dropped);
+       ncs->pt_tx_channel_err = ntohl(rsp->tx_channel_err);
+       ncs->pt_tx_us_err      = ntohl(rsp->tx_us_err);
+       ncs->pt_rx_pkts        = ntohl(rsp->rx_pkts);
+       ncs->pt_rx_dropped     = ntohl(rsp->rx_dropped);
+       ncs->pt_rx_channel_err = ntohl(rsp->rx_channel_err);
+       ncs->pt_rx_us_err      = ntohl(rsp->rx_us_err);
+       ncs->pt_rx_os_err      = ntohl(rsp->rx_os_err);
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gps(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gps_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_package *np;
+
+       /* Find the package */
+       rsp = (struct ncsi_rsp_gps_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     &np, NULL);
+       if (!np)
+               return -ENODEV;
+
+       return 0;
+}
+
+static int ncsi_rsp_handler_gpuuid(struct ncsi_request *nr)
+{
+       struct ncsi_rsp_gpuuid_pkt *rsp;
+       struct ncsi_dev_priv *ndp = nr->ndp;
+       struct ncsi_package *np;
+
+       /* Find the package */
+       rsp = (struct ncsi_rsp_gpuuid_pkt *)skb_network_header(nr->rsp);
+       ncsi_find_package_and_channel(ndp, rsp->rsp.common.channel,
+                                     &np, NULL);
+       if (!np)
+               return -ENODEV;
+
+       memcpy(np->uuid, rsp->uuid, sizeof(rsp->uuid));
+
+       return 0;
+}
+
+static struct ncsi_rsp_handler {
+       unsigned char   type;
+       int             payload;
+       int             (*handler)(struct ncsi_request *nr);
+} ncsi_rsp_handlers[] = {
+       { NCSI_PKT_RSP_CIS,     4, ncsi_rsp_handler_cis     },
+       { NCSI_PKT_RSP_SP,      4, ncsi_rsp_handler_sp      },
+       { NCSI_PKT_RSP_DP,      4, ncsi_rsp_handler_dp      },
+       { NCSI_PKT_RSP_EC,      4, ncsi_rsp_handler_ec      },
+       { NCSI_PKT_RSP_DC,      4, ncsi_rsp_handler_dc      },
+       { NCSI_PKT_RSP_RC,      4, ncsi_rsp_handler_rc      },
+       { NCSI_PKT_RSP_ECNT,    4, ncsi_rsp_handler_ecnt    },
+       { NCSI_PKT_RSP_DCNT,    4, ncsi_rsp_handler_dcnt    },
+       { NCSI_PKT_RSP_AE,      4, ncsi_rsp_handler_ae      },
+       { NCSI_PKT_RSP_SL,      4, ncsi_rsp_handler_sl      },
+       { NCSI_PKT_RSP_GLS,    16, ncsi_rsp_handler_gls     },
+       { NCSI_PKT_RSP_SVF,     4, ncsi_rsp_handler_svf     },
+       { NCSI_PKT_RSP_EV,      4, ncsi_rsp_handler_ev      },
+       { NCSI_PKT_RSP_DV,      4, ncsi_rsp_handler_dv      },
+       { NCSI_PKT_RSP_SMA,     4, ncsi_rsp_handler_sma     },
+       { NCSI_PKT_RSP_EBF,     4, ncsi_rsp_handler_ebf     },
+       { NCSI_PKT_RSP_DBF,     4, ncsi_rsp_handler_dbf     },
+       { NCSI_PKT_RSP_EGMF,    4, ncsi_rsp_handler_egmf    },
+       { NCSI_PKT_RSP_DGMF,    4, ncsi_rsp_handler_dgmf    },
+       { NCSI_PKT_RSP_SNFC,    4, ncsi_rsp_handler_snfc    },
+       { NCSI_PKT_RSP_GVI,    36, ncsi_rsp_handler_gvi     },
+       { NCSI_PKT_RSP_GC,     32, ncsi_rsp_handler_gc      },
+       { NCSI_PKT_RSP_GP,     -1, ncsi_rsp_handler_gp      },
+       { NCSI_PKT_RSP_GCPS,  172, ncsi_rsp_handler_gcps    },
+       { NCSI_PKT_RSP_GNS,   172, ncsi_rsp_handler_gns     },
+       { NCSI_PKT_RSP_GNPTS, 172, ncsi_rsp_handler_gnpts   },
+       { NCSI_PKT_RSP_GPS,     8, ncsi_rsp_handler_gps     },
+       { NCSI_PKT_RSP_OEM,     0, NULL                     },
+       { NCSI_PKT_RSP_PLDM,    0, NULL                     },
+       { NCSI_PKT_RSP_GPUUID, 20, ncsi_rsp_handler_gpuuid  }
+};
+
+int ncsi_rcv_rsp(struct sk_buff *skb, struct net_device *dev,
+                struct packet_type *pt, struct net_device *orig_dev)
+{
+       struct ncsi_rsp_handler *nrh = NULL;
+       struct ncsi_dev *nd;
+       struct ncsi_dev_priv *ndp;
+       struct ncsi_request *nr;
+       struct ncsi_pkt_hdr *hdr;
+       unsigned long flags;
+       int payload, i, ret;
+
+       /* Find the NCSI device */
+       nd = ncsi_find_dev(dev);
+       ndp = nd ? TO_NCSI_DEV_PRIV(nd) : NULL;
+       if (!ndp)
+               return -ENODEV;
+
+       /* Check if it is an AEN packet */
+       hdr = (struct ncsi_pkt_hdr *)skb_network_header(skb);
+       if (hdr->type == NCSI_PKT_AEN)
+               return ncsi_aen_handler(ndp, skb);
+
+       /* Find the handler */
+       for (i = 0; i < ARRAY_SIZE(ncsi_rsp_handlers); i++) {
+               if (ncsi_rsp_handlers[i].type == hdr->type) {
+                       if (ncsi_rsp_handlers[i].handler)
+                               nrh = &ncsi_rsp_handlers[i];
+                       else
+                               nrh = NULL;
+
+                       break;
+               }
+       }
+
+       if (!nrh) {
+               netdev_err(nd->dev, "Received unrecognized packet (0x%x)\n",
+                          hdr->type);
+               return -ENOENT;
+       }
+
+       /* Associate with the request */
+       spin_lock_irqsave(&ndp->lock, flags);
+       nr = &ndp->requests[hdr->id];
+       if (!nr->used) {
+               spin_unlock_irqrestore(&ndp->lock, flags);
+               return -ENODEV;
+       }
+
+       nr->rsp = skb;
+       if (!nr->enabled) {
+               spin_unlock_irqrestore(&ndp->lock, flags);
+               ret = -ENOENT;
+               goto out;
+       }
+
+       /* Validate the packet */
+       spin_unlock_irqrestore(&ndp->lock, flags);
+       payload = nrh->payload;
+       if (payload < 0)
+               payload = ntohs(hdr->length);
+       ret = ncsi_validate_rsp_pkt(nr, payload);
+       if (ret)
+               goto out;
+
+       /* Process the packet */
+       ret = nrh->handler(nr);
+out:
+       ncsi_free_request(nr);
+       return ret;
+}
index 803001a..1b07578 100644
@@ -1545,7 +1545,8 @@ error:
 /*
  *      Set up receiving multicast socket over UDP
  */
-static struct socket *make_receive_sock(struct netns_ipvs *ipvs, int id)
+static struct socket *make_receive_sock(struct netns_ipvs *ipvs, int id,
+                                       int ifindex)
 {
        /* multicast addr */
        union ipvs_sockaddr mcast_addr;
@@ -1566,6 +1567,7 @@ static struct socket *make_receive_sock(struct netns_ipvs *ipvs, int id)
                set_sock_size(sock->sk, 0, result);
 
        get_mcast_sockaddr(&mcast_addr, &salen, &ipvs->bcfg, id);
+       sock->sk->sk_bound_dev_if = ifindex;
        result = sock->ops->bind(sock, (struct sockaddr *)&mcast_addr, salen);
        if (result < 0) {
                pr_err("Error binding to the multicast addr\n");
@@ -1868,7 +1870,7 @@ int start_sync_thread(struct netns_ipvs *ipvs, struct ipvs_sync_daemon_cfg *c,
                if (state == IP_VS_STATE_MASTER)
                        sock = make_send_sock(ipvs, id);
                else
-                       sock = make_receive_sock(ipvs, id);
+                       sock = make_receive_sock(ipvs, id, dev->ifindex);
                if (IS_ERR(sock)) {
                        result = PTR_ERR(sock);
                        goto outtinfo;
index 2d46225..9198e69 100644
@@ -657,6 +657,7 @@ static int nf_ct_resolve_clash(struct net *net, struct sk_buff *skb,
 
        l4proto = __nf_ct_l4proto_find(nf_ct_l3num(ct), nf_ct_protonum(ct));
        if (l4proto->allow_clash &&
+           !nfct_nat(ct) &&
            !nf_ct_is_dying(ct) &&
            atomic_inc_not_zero(&ct->ct_general.use)) {
                nf_ct_acct_merge(ct, ctinfo, (struct nf_conn *)skb->nfct);
@@ -1600,8 +1601,15 @@ void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
        unsigned int nr_slots, i;
        size_t sz;
 
+       if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head)))
+               return NULL;
+
        BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head));
        nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head));
+
+       if (nr_slots > (UINT_MAX / sizeof(struct hlist_nulls_head)))
+               return NULL;
+
        sz = nr_slots * sizeof(struct hlist_nulls_head);
        hash = (void *)__get_free_pages(GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO,
                                        get_order(sz));
index 13d50e7..7e1c876 100644
@@ -1705,9 +1705,11 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
 
        err = nf_tables_newexpr(ctx, &info, expr);
        if (err < 0)
-               goto err2;
+               goto err3;
 
        return expr;
+err3:
+       kfree(expr);
 err2:
        module_put(info.ops->type->owner);
 err1:
index 2f47d5d..51e180f 100644
@@ -54,7 +54,6 @@ static void nft_ct_get_eval(const struct nft_expr *expr,
        const struct nf_conn_help *help;
        const struct nf_conntrack_tuple *tuple;
        const struct nf_conntrack_helper *helper;
-       long diff;
        unsigned int state;
 
        ct = nf_ct_get(pkt->skb, &ctinfo);
@@ -94,10 +93,7 @@ static void nft_ct_get_eval(const struct nft_expr *expr,
                return;
 #endif
        case NFT_CT_EXPIRATION:
-               diff = (long)jiffies - (long)ct->timeout.expires;
-               if (diff < 0)
-                       diff = 0;
-               *dest = jiffies_to_msecs(diff);
+               *dest = jiffies_to_msecs(nf_ct_expires(ct));
                return;
        case NFT_CT_HELPER:
                if (ct->master == NULL)
index 03e5e33..2863f34 100644
@@ -220,7 +220,7 @@ void nft_meta_set_eval(const struct nft_expr *expr,
                        skb->pkt_type = value;
                break;
        case NFT_META_NFTRACE:
-               skb->nf_trace = 1;
+               skb->nf_trace = !!value;
                break;
        default:
                WARN_ON(1);
index dd9003f..0fd5518 100644
@@ -30,6 +30,9 @@
 
 #define DIGITAL_PROTO_ISO15693_RF_TECH NFC_PROTO_ISO15693_MASK
 
+/* Delay between each poll frame (ms) */
+#define DIGITAL_POLL_INTERVAL 10
+
 struct digital_cmd {
        struct list_head queue;
 
@@ -173,6 +176,8 @@ static void digital_wq_cmd(struct work_struct *work)
                return;
        }
 
+       cmd->pending = 1;
+
        mutex_unlock(&ddev->cmd_lock);
 
        if (cmd->req)
@@ -419,7 +424,8 @@ void digital_poll_next_tech(struct nfc_digital_dev *ddev)
 
        mutex_unlock(&ddev->poll_lock);
 
-       schedule_work(&ddev->poll_work);
+       schedule_delayed_work(&ddev->poll_work,
+                             msecs_to_jiffies(DIGITAL_POLL_INTERVAL));
 }
 
 static void digital_wq_poll(struct work_struct *work)
@@ -428,7 +434,7 @@ static void digital_wq_poll(struct work_struct *work)
        struct digital_poll_tech *poll_tech;
        struct nfc_digital_dev *ddev = container_of(work,
                                                    struct nfc_digital_dev,
-                                                   poll_work);
+                                                   poll_work.work);
        mutex_lock(&ddev->poll_lock);
 
        if (!ddev->poll_tech_count) {
@@ -543,7 +549,7 @@ static int digital_start_poll(struct nfc_dev *nfc_dev, __u32 im_protocols,
                return -EINVAL;
        }
 
-       schedule_work(&ddev->poll_work);
+       schedule_delayed_work(&ddev->poll_work, 0);
 
        return 0;
 }
@@ -564,7 +570,7 @@ static void digital_stop_poll(struct nfc_dev *nfc_dev)
 
        mutex_unlock(&ddev->poll_lock);
 
-       cancel_work_sync(&ddev->poll_work);
+       cancel_delayed_work_sync(&ddev->poll_work);
 
        digital_abort_cmd(ddev);
 }
@@ -606,6 +612,8 @@ static int digital_dep_link_down(struct nfc_dev *nfc_dev)
 {
        struct nfc_digital_dev *ddev = nfc_get_drvdata(nfc_dev);
 
+       digital_abort_cmd(ddev);
+
        ddev->curr_protocol = 0;
 
        return 0;
@@ -770,7 +778,7 @@ struct nfc_digital_dev *nfc_digital_allocate_device(struct nfc_digital_ops *ops,
        INIT_WORK(&ddev->cmd_complete_work, digital_wq_cmd_complete);
 
        mutex_init(&ddev->poll_lock);
-       INIT_WORK(&ddev->poll_work, digital_wq_poll);
+       INIT_DELAYED_WORK(&ddev->poll_work, digital_wq_poll);
 
        if (supported_protocols & NFC_PROTO_JEWEL_MASK)
                ddev->protocols |= NFC_PROTO_JEWEL_MASK;
@@ -832,12 +840,20 @@ void nfc_digital_unregister_device(struct nfc_digital_dev *ddev)
        ddev->poll_tech_count = 0;
        mutex_unlock(&ddev->poll_lock);
 
-       cancel_work_sync(&ddev->poll_work);
+       cancel_delayed_work_sync(&ddev->poll_work);
        cancel_work_sync(&ddev->cmd_work);
        cancel_work_sync(&ddev->cmd_complete_work);
 
        list_for_each_entry_safe(cmd, n, &ddev->cmd_queue, queue) {
                list_del(&cmd->queue);
+
+               /* Call the command callback, if any, and pass it an -ENODEV
+                * error. This gives the command issuer a chance to free any
+                * allocated buffer.
+                */
+               if (cmd->cmd_cb)
+                       cmd->cmd_cb(ddev, cmd->cb_context, ERR_PTR(-ENODEV));
+
                kfree(cmd->mdaa_params);
                kfree(cmd);
        }
index f72be74..f864ce1 100644
@@ -35,6 +35,8 @@
 #define DIGITAL_ATR_REQ_MIN_SIZE 16
 #define DIGITAL_ATR_REQ_MAX_SIZE 64
 
+#define DIGITAL_ATR_RES_TO_WT(s)       ((s) & 0xF)
+
 #define DIGITAL_DID_MAX        14
 
 #define DIGITAL_PAYLOAD_SIZE_MAX       254
@@ -63,6 +65,9 @@
 #define DIGITAL_NFC_DEP_DID_BIT_SET(pfb) ((pfb) & DIGITAL_NFC_DEP_PFB_DID_BIT)
 #define DIGITAL_NFC_DEP_PFB_PNI(pfb)     ((pfb) & 0x03)
 
+#define DIGITAL_NFC_DEP_RTOX_VALUE(data) ((data) & 0x3F)
+#define DIGITAL_NFC_DEP_RTOX_MAX        59
+
 #define DIGITAL_NFC_DEP_PFB_I_PDU          0x00
 #define DIGITAL_NFC_DEP_PFB_ACK_NACK_PDU   0x40
 #define DIGITAL_NFC_DEP_PFB_SUPERVISOR_PDU 0x80
@@ -122,6 +127,37 @@ static const u8 digital_payload_bits_map[4] = {
        [3] = 254
 };
 
+/* Response Waiting Time for ATR_RES PDU in ms
+ *
+ * RWT(ATR_RES) = RWT(nfcdep,activation) + dRWT(nfcdep) + dT(nfcdep,initiator)
+ *
+ * with:
+ *  RWT(nfcdep,activation) = 4096 * 2^12 / f(c) s
+ *  dRWT(nfcdep) = 16 / f(c) s
+ *  dT(nfcdep,initiator) = 100 ms
+ *  f(c) = 13560000 Hz
+ */
+#define DIGITAL_ATR_RES_RWT 1337
+
+/* Response Waiting Time for other DEP PDUs in ms
+ *
+ * max_rwt = rwt + dRWT(nfcdep) + dT(nfcdep,initiator)
+ *
+ * with:
+ *  rwt = (256 * 16 / f(c)) * 2^wt s
+ *  dRWT(nfcdep) = 16 / f(c) s
+ *  dT(nfcdep,initiator) = 100 ms
+ *  f(c) = 13560000 Hz
+ *  0 <= wt <= 14 (given by the target by the TO field of ATR_RES response)
+ */
+#define DIGITAL_NFC_DEP_IN_MAX_WT 14
+#define DIGITAL_NFC_DEP_TG_MAX_WT 8
+static const u16 digital_rwt_map[DIGITAL_NFC_DEP_IN_MAX_WT + 1] = {
+       100,  101,  101,  102,  105,
+       110,  119,  139,  177,  255,
+       409,  719, 1337, 2575, 5049,
+};
+
 static u8 digital_payload_bits_to_size(u8 payload_bits)
 {
        if (payload_bits >= ARRAY_SIZE(digital_payload_bits_map))
@@ -190,8 +226,6 @@ digital_send_dep_data_prep(struct nfc_digital_dev *ddev, struct sk_buff *skb,
                        return ERR_PTR(-ENOMEM);
                }
 
-               skb_reserve(new_skb, ddev->tx_headroom + NFC_HEADER_SIZE +
-                                       DIGITAL_NFC_DEP_REQ_RES_HEADROOM);
                memcpy(skb_put(new_skb, ddev->remote_payload_max), skb->data,
                       ddev->remote_payload_max);
                skb_pull(skb, ddev->remote_payload_max);
@@ -368,8 +402,8 @@ static int digital_in_send_psl_req(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       rc = digital_in_send_cmd(ddev, skb, 500, digital_in_recv_psl_res,
-                                target);
+       rc = digital_in_send_cmd(ddev, skb, ddev->dep_rwt,
+                                digital_in_recv_psl_res, target);
        if (rc)
                kfree_skb(skb);
 
@@ -382,6 +416,7 @@ static void digital_in_recv_atr_res(struct nfc_digital_dev *ddev, void *arg,
        struct nfc_target *target = arg;
        struct digital_atr_res *atr_res;
        u8 gb_len, payload_bits;
+       u8 wt;
        int rc;
 
        if (IS_ERR(resp)) {
@@ -411,6 +446,11 @@ static void digital_in_recv_atr_res(struct nfc_digital_dev *ddev, void *arg,
 
        atr_res = (struct digital_atr_res *)resp->data;
 
+       wt = DIGITAL_ATR_RES_TO_WT(atr_res->to);
+       if (wt > DIGITAL_NFC_DEP_IN_MAX_WT)
+               wt = DIGITAL_NFC_DEP_IN_MAX_WT;
+       ddev->dep_rwt = digital_rwt_map[wt];
+
        payload_bits = DIGITAL_PAYLOAD_PP_TO_BITS(atr_res->pp);
        ddev->remote_payload_max = digital_payload_bits_to_size(payload_bits);
 
@@ -492,8 +532,8 @@ int digital_in_send_atr_req(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       rc = digital_in_send_cmd(ddev, skb, 500, digital_in_recv_atr_res,
-                                target);
+       rc = digital_in_send_cmd(ddev, skb, DIGITAL_ATR_RES_RWT,
+                                digital_in_recv_atr_res, target);
        if (rc)
                kfree_skb(skb);
 
@@ -524,11 +564,10 @@ static int digital_in_send_ack(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       ddev->saved_skb = skb_get(skb);
-       ddev->saved_skb_len = skb->len;
+       ddev->saved_skb = pskb_copy(skb, GFP_KERNEL);
 
-       rc = digital_in_send_cmd(ddev, skb, 1500, digital_in_recv_dep_res,
-                                data_exch);
+       rc = digital_in_send_cmd(ddev, skb, ddev->dep_rwt,
+                                digital_in_recv_dep_res, data_exch);
        if (rc) {
                kfree_skb(skb);
                kfree_skb(ddev->saved_skb);
@@ -562,8 +601,8 @@ static int digital_in_send_nack(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       rc = digital_in_send_cmd(ddev, skb, 1500, digital_in_recv_dep_res,
-                                data_exch);
+       rc = digital_in_send_cmd(ddev, skb, ddev->dep_rwt,
+                                digital_in_recv_dep_res, data_exch);
        if (rc)
                kfree_skb(skb);
 
@@ -593,8 +632,8 @@ static int digital_in_send_atn(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       rc = digital_in_send_cmd(ddev, skb, 1500, digital_in_recv_dep_res,
-                                data_exch);
+       rc = digital_in_send_cmd(ddev, skb, ddev->dep_rwt,
+                                digital_in_recv_dep_res, data_exch);
        if (rc)
                kfree_skb(skb);
 
@@ -607,6 +646,11 @@ static int digital_in_send_rtox(struct nfc_digital_dev *ddev,
        struct digital_dep_req_res *dep_req;
        struct sk_buff *skb;
        int rc;
+       u16 rwt_int;
+
+       rwt_int = ddev->dep_rwt * rtox;
+       if (rwt_int > digital_rwt_map[DIGITAL_NFC_DEP_IN_MAX_WT])
+               rwt_int = digital_rwt_map[DIGITAL_NFC_DEP_IN_MAX_WT];
 
        skb = digital_skb_alloc(ddev, 1);
        if (!skb)
@@ -627,16 +671,10 @@ static int digital_in_send_rtox(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       ddev->saved_skb = skb_get(skb);
-       ddev->saved_skb_len = skb->len;
-
-       rc = digital_in_send_cmd(ddev, skb, 1500, digital_in_recv_dep_res,
-                                data_exch);
-       if (rc) {
+       rc = digital_in_send_cmd(ddev, skb, rwt_int,
+                                digital_in_recv_dep_res, data_exch);
+       if (rc)
                kfree_skb(skb);
-               kfree_skb(ddev->saved_skb);
-               ddev->saved_skb = NULL;
-       }
 
        return rc;
 }
@@ -644,11 +682,19 @@ static int digital_in_send_rtox(struct nfc_digital_dev *ddev,
 static int digital_in_send_saved_skb(struct nfc_digital_dev *ddev,
                                     struct digital_data_exch *data_exch)
 {
+       int rc;
+
+       if (!ddev->saved_skb)
+               return -EINVAL;
+
        skb_get(ddev->saved_skb);
-       skb_push(ddev->saved_skb, ddev->saved_skb_len);
 
-       return digital_in_send_cmd(ddev, ddev->saved_skb, 1500,
-                                  digital_in_recv_dep_res, data_exch);
+       rc = digital_in_send_cmd(ddev, ddev->saved_skb, ddev->dep_rwt,
+                                digital_in_recv_dep_res, data_exch);
+       if (rc)
+               kfree_skb(ddev->saved_skb);
+
+       return rc;
 }
 
 static void digital_in_recv_dep_res(struct nfc_digital_dev *ddev, void *arg,
@@ -659,12 +705,13 @@ static void digital_in_recv_dep_res(struct nfc_digital_dev *ddev, void *arg,
        u8 pfb;
        uint size;
        int rc;
+       u8 rtox;
 
        if (IS_ERR(resp)) {
                rc = PTR_ERR(resp);
                resp = NULL;
 
-               if (((rc != -ETIMEDOUT) || ddev->nack_count) &&
+               if ((rc == -EIO || (rc == -ETIMEDOUT && ddev->nack_count)) &&
                    (ddev->nack_count++ < DIGITAL_NFC_DEP_N_RETRY_NACK)) {
                        ddev->atn_count = 0;
 
@@ -783,6 +830,12 @@ static void digital_in_recv_dep_res(struct nfc_digital_dev *ddev, void *arg,
                break;
 
        case DIGITAL_NFC_DEP_PFB_ACK_NACK_PDU:
+               if (DIGITAL_NFC_DEP_NACK_BIT_SET(pfb)) {
+                       PROTOCOL_ERR("14.12.4.5");
+                       rc = -EIO;
+                       goto exit;
+               }
+
                if (DIGITAL_NFC_DEP_PFB_PNI(pfb) != ddev->curr_nfc_dep_pni) {
                        PROTOCOL_ERR("14.12.3.3");
                        rc = -EIO;
@@ -792,43 +845,53 @@ static void digital_in_recv_dep_res(struct nfc_digital_dev *ddev, void *arg,
                ddev->curr_nfc_dep_pni =
                        DIGITAL_NFC_DEP_PFB_PNI(ddev->curr_nfc_dep_pni + 1);
 
-               if (ddev->chaining_skb && !DIGITAL_NFC_DEP_NACK_BIT_SET(pfb)) {
-                       kfree_skb(ddev->saved_skb);
-                       ddev->saved_skb = NULL;
+               if (!ddev->chaining_skb) {
+                       PROTOCOL_ERR("14.12.4.3");
+                       rc = -EIO;
+                       goto exit;
+               }
 
-                       rc = digital_in_send_dep_req(ddev, NULL,
-                                                    ddev->chaining_skb,
-                                                    ddev->data_exch);
-                       if (rc)
-                               goto error;
+               /* The initiator has received a valid ACK. Free the last sent
+                * PDU and keep on sending chained skb.
+                */
+               kfree_skb(ddev->saved_skb);
+               ddev->saved_skb = NULL;
 
-                       return;
-               }
+               rc = digital_in_send_dep_req(ddev, NULL,
+                                            ddev->chaining_skb,
+                                            ddev->data_exch);
+               if (rc)
+                       goto error;
 
-               pr_err("Received a ACK/NACK PDU\n");
-               rc = -EINVAL;
-               goto exit;
+               goto free_resp;
 
        case DIGITAL_NFC_DEP_PFB_SUPERVISOR_PDU:
                if (!DIGITAL_NFC_DEP_PFB_IS_TIMEOUT(pfb)) { /* ATN */
                        rc = digital_in_send_saved_skb(ddev, data_exch);
-                       if (rc) {
-                               kfree_skb(ddev->saved_skb);
+                       if (rc)
                                goto error;
-                       }
 
-                       return;
+                       goto free_resp;
                }
 
-               kfree_skb(ddev->saved_skb);
-               ddev->saved_skb = NULL;
+               if (ddev->atn_count || ddev->nack_count) {
+                       PROTOCOL_ERR("14.12.4.4");
+                       rc = -EIO;
+                       goto error;
+               }
+
+               rtox = DIGITAL_NFC_DEP_RTOX_VALUE(resp->data[0]);
+               if (!rtox || rtox > DIGITAL_NFC_DEP_RTOX_MAX) {
+                       PROTOCOL_ERR("14.8.4.1");
+                       rc = -EIO;
+                       goto error;
+               }
 
-               rc = digital_in_send_rtox(ddev, data_exch, resp->data[0]);
+               rc = digital_in_send_rtox(ddev, data_exch, rtox);
                if (rc)
                        goto error;
 
-               kfree_skb(resp);
-               return;
+               goto free_resp;
        }
 
 exit:
@@ -845,6 +908,11 @@ error:
 
        if (rc)
                kfree_skb(resp);
+
+       return;
+
+free_resp:
+       dev_kfree_skb(resp);
 }
 
 int digital_in_send_dep_req(struct nfc_digital_dev *ddev,
@@ -876,11 +944,10 @@ int digital_in_send_dep_req(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(tmp_skb);
 
-       ddev->saved_skb = skb_get(tmp_skb);
-       ddev->saved_skb_len = tmp_skb->len;
+       ddev->saved_skb = pskb_copy(tmp_skb, GFP_KERNEL);
 
-       rc = digital_in_send_cmd(ddev, tmp_skb, 1500, digital_in_recv_dep_res,
-                                data_exch);
+       rc = digital_in_send_cmd(ddev, tmp_skb, ddev->dep_rwt,
+                                digital_in_recv_dep_res, data_exch);
        if (rc) {
                if (tmp_skb != skb)
                        kfree_skb(tmp_skb);
@@ -956,8 +1023,7 @@ static int digital_tg_send_ack(struct nfc_digital_dev *ddev,
 
        ddev->skb_add_crc(skb);
 
-       ddev->saved_skb = skb_get(skb);
-       ddev->saved_skb_len = skb->len;
+       ddev->saved_skb = pskb_copy(skb, GFP_KERNEL);
 
        rc = digital_tg_send_cmd(ddev, skb, 1500, digital_tg_recv_dep_req,
                                 data_exch);
@@ -1009,11 +1075,19 @@ static int digital_tg_send_atn(struct nfc_digital_dev *ddev)
 
 static int digital_tg_send_saved_skb(struct nfc_digital_dev *ddev)
 {
+       int rc;
+
+       if (!ddev->saved_skb)
+               return -EINVAL;
+
        skb_get(ddev->saved_skb);
-       skb_push(ddev->saved_skb, ddev->saved_skb_len);
 
-       return digital_tg_send_cmd(ddev, ddev->saved_skb, 1500,
-                                  digital_tg_recv_dep_req, NULL);
+       rc = digital_tg_send_cmd(ddev, ddev->saved_skb, 1500,
+                                digital_tg_recv_dep_req, NULL);
+       if (rc)
+               kfree_skb(ddev->saved_skb);
+
+       return rc;
 }
 
 static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
@@ -1086,22 +1160,38 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
        case DIGITAL_NFC_DEP_PFB_I_PDU:
                pr_debug("DIGITAL_NFC_DEP_PFB_I_PDU\n");
 
-               if ((ddev->atn_count && (DIGITAL_NFC_DEP_PFB_PNI(pfb - 1) !=
-                                               ddev->curr_nfc_dep_pni)) ||
-                   (DIGITAL_NFC_DEP_PFB_PNI(pfb) != ddev->curr_nfc_dep_pni)) {
-                       PROTOCOL_ERR("14.12.3.4");
-                       rc = -EIO;
-                       goto exit;
-               }
-
                if (ddev->atn_count) {
+                       /* The target has received (and replied to) at least one
+                        * ATN DEP_REQ.
+                        */
                        ddev->atn_count = 0;
 
-                       rc = digital_tg_send_saved_skb(ddev);
-                       if (rc)
-                               goto exit;
+                       /* pni of resp PDU equal to the target current pni - 1

+                       /* A resp PDU pni equal to the target's current pni - 1
+                        * means resp is the previous DEP_REQ PDU received from
+                        * the initiator, so the target replies with saved_skb,
+                        * which is the previous DEP_RES saved in
+                        * digital_tg_send_dep_res().
+                        */
+                       if (DIGITAL_NFC_DEP_PFB_PNI(pfb) ==
+                         DIGITAL_NFC_DEP_PFB_PNI(ddev->curr_nfc_dep_pni - 1)) {
+                               rc = digital_tg_send_saved_skb(ddev);
+                               if (rc)
+                                       goto exit;
 
-                       return;
+                               goto free_resp;
+                       }
+
+                       /* atn_count > 0 and PDU pni != curr_nfc_dep_pni - 1
+                        * means the target probably did not receive the last
+                        * DEP_REQ PDU sent by the initiator. The target then
+                        * falls back to normal processing.
+                        */
+               }
+
+               if (DIGITAL_NFC_DEP_PFB_PNI(pfb) != ddev->curr_nfc_dep_pni) {
+                       PROTOCOL_ERR("14.12.3.4");
+                       rc = -EIO;
+                       goto exit;
                }
 
                kfree_skb(ddev->saved_skb);
@@ -1125,51 +1215,64 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
                rc = 0;
                break;
        case DIGITAL_NFC_DEP_PFB_ACK_NACK_PDU:
-               if (!DIGITAL_NFC_DEP_NACK_BIT_SET(pfb)) { /* ACK */
-                       if ((ddev->atn_count &&
-                            (DIGITAL_NFC_DEP_PFB_PNI(pfb - 1) !=
-                                               ddev->curr_nfc_dep_pni)) ||
-                           (DIGITAL_NFC_DEP_PFB_PNI(pfb) !=
-                                               ddev->curr_nfc_dep_pni) ||
-                           !ddev->chaining_skb || !ddev->saved_skb) {
+               if (DIGITAL_NFC_DEP_NACK_BIT_SET(pfb)) { /* NACK */
+                       if (DIGITAL_NFC_DEP_PFB_PNI(pfb + 1) !=
+                                               ddev->curr_nfc_dep_pni) {
                                rc = -EIO;
                                goto exit;
                        }
 
-                       if (ddev->atn_count) {
-                               ddev->atn_count = 0;
+                       ddev->atn_count = 0;
 
+                       rc = digital_tg_send_saved_skb(ddev);
+                       if (rc)
+                               goto exit;
+
+                       goto free_resp;
+               }
+
+               /* ACK */
+               if (ddev->atn_count) {
+                       /* The target has previously received one or more ATN
+                        * PDUs.
+                        */
+                       ddev->atn_count = 0;
+
+                       /* An ACK PNI equal to the target PNI - 1 means that
+                        * the initiator did not receive the previous PDU sent
+                        * by the target, so re-send it.
+                        */
+                       if (DIGITAL_NFC_DEP_PFB_PNI(pfb + 1) ==
+                                               ddev->curr_nfc_dep_pni) {
                                rc = digital_tg_send_saved_skb(ddev);
                                if (rc)
                                        goto exit;
 
-                               return;
+                               goto free_resp;
                        }
 
-                       kfree_skb(ddev->saved_skb);
-                       ddev->saved_skb = NULL;
+                       /* Otherwise, the target did not receive the previous
+                        * ACK PDU from the initiator. Fall back to normal
+                        * processing of the chained PDU.
+                        */
+               }
 
-                       rc = digital_tg_send_dep_res(ddev, ddev->chaining_skb);
-                       if (rc)
-                               goto exit;
-               } else { /* NACK */
-                       if ((DIGITAL_NFC_DEP_PFB_PNI(pfb + 1) !=
-                                               ddev->curr_nfc_dep_pni) ||
-                           !ddev->saved_skb) {
-                               rc = -EIO;
-                               goto exit;
-                       }
+               /* Keep on sending chained PDU */
+               if (!ddev->chaining_skb ||
+                   DIGITAL_NFC_DEP_PFB_PNI(pfb) !=
+                                       ddev->curr_nfc_dep_pni) {
+                       rc = -EIO;
+                       goto exit;
+               }
 
-                       ddev->atn_count = 0;
+               kfree_skb(ddev->saved_skb);
+               ddev->saved_skb = NULL;
 
-                       rc = digital_tg_send_saved_skb(ddev);
-                       if (rc) {
-                               kfree_skb(ddev->saved_skb);
-                               goto exit;
-                       }
-               }
+               rc = digital_tg_send_dep_res(ddev, ddev->chaining_skb);
+               if (rc)
+                       goto exit;
 
-               return;
+               goto free_resp;
        case DIGITAL_NFC_DEP_PFB_SUPERVISOR_PDU:
                if (DIGITAL_NFC_DEP_PFB_IS_TIMEOUT(pfb)) {
                        rc = -EINVAL;
@@ -1182,8 +1285,7 @@ static void digital_tg_recv_dep_req(struct nfc_digital_dev *ddev, void *arg,
 
                ddev->atn_count++;
 
-               kfree_skb(resp);
-               return;
+               goto free_resp;
        }
 
        rc = nfc_tm_data_received(ddev->nfc_dev, resp);
@@ -1199,6 +1301,11 @@ exit:
 
        if (rc)
                kfree_skb(resp);
+
+       return;
+
+free_resp:
+       dev_kfree_skb(resp);
 }
 
 int digital_tg_send_dep_res(struct nfc_digital_dev *ddev, struct sk_buff *skb)
@@ -1235,8 +1342,7 @@ int digital_tg_send_dep_res(struct nfc_digital_dev *ddev, struct sk_buff *skb)
 
        ddev->skb_add_crc(tmp_skb);
 
-       ddev->saved_skb = skb_get(tmp_skb);
-       ddev->saved_skb_len = tmp_skb->len;
+       ddev->saved_skb = pskb_copy(tmp_skb, GFP_KERNEL);
 
        rc = digital_tg_send_cmd(ddev, tmp_skb, 1500, digital_tg_recv_dep_req,
                                 NULL);
@@ -1420,7 +1526,7 @@ static int digital_tg_send_atr_res(struct nfc_digital_dev *ddev,
        atr_res->dir = DIGITAL_NFC_DEP_FRAME_DIR_IN;
        atr_res->cmd = DIGITAL_CMD_ATR_RES;
        memcpy(atr_res->nfcid3, atr_req->nfcid3, sizeof(atr_req->nfcid3));
-       atr_res->to = 8;
+       atr_res->to = DIGITAL_NFC_DEP_TG_MAX_WT;
 
        ddev->local_payload_max = DIGITAL_PAYLOAD_SIZE_MAX;
        payload_bits = digital_payload_size_to_bits(ddev->local_payload_max);
index fb58ed2..d9080de 100644
@@ -1257,21 +1257,12 @@ static int digital_tg_config_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
 int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
 {
        int rc;
-       u8 *nfcid2;
 
        rc = digital_tg_config_nfcf(ddev, rf_tech);
        if (rc)
                return rc;
 
-       nfcid2 = kzalloc(NFC_NFCID2_MAXSIZE, GFP_KERNEL);
-       if (!nfcid2)
-               return -ENOMEM;
-
-       nfcid2[0] = DIGITAL_SENSF_NFCID2_NFC_DEP_B1;
-       nfcid2[1] = DIGITAL_SENSF_NFCID2_NFC_DEP_B2;
-       get_random_bytes(nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);
-
-       return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, nfcid2);
+       return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, NULL);
 }
 
 void digital_tg_recv_md_req(struct nfc_digital_dev *ddev, void *arg,
index 1399a03..3d699cb 100644
@@ -133,36 +133,29 @@ void nfc_llc_free(struct nfc_llc *llc)
        kfree(llc);
 }
 
-inline void nfc_llc_get_rx_head_tail_room(struct nfc_llc *llc, int *rx_headroom,
-                                         int *rx_tailroom)
-{
-       *rx_headroom = llc->rx_headroom;
-       *rx_tailroom = llc->rx_tailroom;
-}
-
-inline int nfc_llc_start(struct nfc_llc *llc)
+int nfc_llc_start(struct nfc_llc *llc)
 {
        return llc->ops->start(llc);
 }
 EXPORT_SYMBOL(nfc_llc_start);
 
-inline int nfc_llc_stop(struct nfc_llc *llc)
+int nfc_llc_stop(struct nfc_llc *llc)
 {
        return llc->ops->stop(llc);
 }
 EXPORT_SYMBOL(nfc_llc_stop);
 
-inline void nfc_llc_rcv_from_drv(struct nfc_llc *llc, struct sk_buff *skb)
+void nfc_llc_rcv_from_drv(struct nfc_llc *llc, struct sk_buff *skb)
 {
        llc->ops->rcv_from_drv(llc, skb);
 }
 
-inline int nfc_llc_xmit_from_hci(struct nfc_llc *llc, struct sk_buff *skb)
+int nfc_llc_xmit_from_hci(struct nfc_llc *llc, struct sk_buff *skb)
 {
        return llc->ops->xmit_from_hci(llc, skb);
 }
 
-inline void *nfc_llc_get_data(struct nfc_llc *llc)
+void *nfc_llc_get_data(struct nfc_llc *llc)
 {
        return llc->data;
 }
index 3425532..c5959ce 100644
@@ -438,19 +438,17 @@ int nfc_llcp_send_connect(struct nfc_llcp_sock *sock)
                goto error_tlv;
        }
 
-       if (service_name_tlv != NULL)
-               skb = llcp_add_tlv(skb, service_name_tlv,
-                                  service_name_tlv_length);
-
-       skb = llcp_add_tlv(skb, miux_tlv, miux_tlv_length);
-       skb = llcp_add_tlv(skb, rw_tlv, rw_tlv_length);
+       llcp_add_tlv(skb, service_name_tlv, service_name_tlv_length);
+       llcp_add_tlv(skb, miux_tlv, miux_tlv_length);
+       llcp_add_tlv(skb, rw_tlv, rw_tlv_length);
 
        skb_queue_tail(&local->tx_queue, skb);
 
-       return 0;
+       err = 0;
 
 error_tlv:
-       pr_err("error %d\n", err);
+       if (err)
+               pr_err("error %d\n", err);
 
        kfree(service_name_tlv);
        kfree(miux_tlv);
@@ -493,15 +491,16 @@ int nfc_llcp_send_cc(struct nfc_llcp_sock *sock)
                goto error_tlv;
        }
 
-       skb = llcp_add_tlv(skb, miux_tlv, miux_tlv_length);
-       skb = llcp_add_tlv(skb, rw_tlv, rw_tlv_length);
+       llcp_add_tlv(skb, miux_tlv, miux_tlv_length);
+       llcp_add_tlv(skb, rw_tlv, rw_tlv_length);
 
        skb_queue_tail(&local->tx_queue, skb);
 
-       return 0;
+       err = 0;
 
 error_tlv:
-       pr_err("error %d\n", err);
+       if (err)
+               pr_err("error %d\n", err);
 
        kfree(miux_tlv);
        kfree(rw_tlv);
index 9887627..e69786c 100644
@@ -732,9 +732,8 @@ static void nfc_llcp_tx_work(struct work_struct *work)
                        int ret;
 
                        pr_debug("Sending pending skb\n");
-                       print_hex_dump(KERN_DEBUG, "LLCP Tx: ",
-                                      DUMP_PREFIX_OFFSET, 16, 1,
-                                      skb->data, skb->len, true);
+                       print_hex_dump_debug("LLCP Tx: ", DUMP_PREFIX_OFFSET,
+                                            16, 1, skb->data, skb->len, true);
 
                        if (ptype == LLCP_PDU_DISC && sk != NULL &&
                            sk->sk_state == LLCP_DISCONNECTING) {
@@ -1412,8 +1411,8 @@ static void nfc_llcp_rx_skb(struct nfc_llcp_local *local, struct sk_buff *skb)
        pr_debug("ptype 0x%x dsap 0x%x ssap 0x%x\n", ptype, dsap, ssap);
 
        if (ptype != LLCP_PDU_SYMM)
-               print_hex_dump(KERN_DEBUG, "LLCP Rx: ", DUMP_PREFIX_OFFSET,
-                              16, 1, skb->data, skb->len, true);
+               print_hex_dump_debug("LLCP Rx: ", DUMP_PREFIX_OFFSET, 16, 1,
+                                    skb->data, skb->len, true);
 
        switch (ptype) {
        case LLCP_PDU_SYMM:
index 9d92c4c..33a4697 100644
@@ -1923,13 +1923,11 @@ retry:
                goto out_unlock;
        }
 
-       sockc.tsflags = 0;
+       sockc.tsflags = sk->sk_tsflags;
        if (msg->msg_controllen) {
                err = sock_cmsg_send(sk, msg, &sockc);
-               if (unlikely(err)) {
-                       err = -EINVAL;
+               if (unlikely(err))
                        goto out_unlock;
-               }
        }
 
        skb->protocol = proto;
@@ -2642,7 +2640,7 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
                dev = dev_get_by_index(sock_net(&po->sk), saddr->sll_ifindex);
        }
 
-       sockc.tsflags = 0;
+       sockc.tsflags = po->sk.sk_tsflags;
        if (msg->msg_controllen) {
                err = sock_cmsg_send(&po->sk, msg, &sockc);
                if (unlikely(err))
@@ -2845,7 +2843,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
        if (unlikely(!(dev->flags & IFF_UP)))
                goto out_unlock;
 
-       sockc.tsflags = 0;
+       sockc.tsflags = sk->sk_tsflags;
        sockc.mark = sk->sk_mark;
        if (msg->msg_controllen) {
                err = sock_cmsg_send(sk, msg, &sockc);
index b22ea95..095f6ce 100644
@@ -81,6 +81,8 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
 
        if (*port != 0) {
                rover = be16_to_cpu(*port);
+               if (rover == RDS_FLAG_PROBE_PORT)
+                       return -EINVAL;
                last = rover;
        } else {
                rover = max_t(u16, prandom_u32(), 2);
@@ -91,12 +93,16 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
                if (rover == 0)
                        rover++;
 
+               if (rover == RDS_FLAG_PROBE_PORT)
+                       continue;
                key = ((u64)addr << 32) | cpu_to_be16(rover);
                if (rhashtable_lookup_fast(&bind_hash_table, &key, ht_parms))
                        continue;
 
                rs->rs_bound_key = key;
                rs->rs_bound_addr = addr;
+               net_get_random_once(&rs->rs_hash_initval,
+                                   sizeof(rs->rs_hash_initval));
                rs->rs_bound_port = cpu_to_be16(rover);
                rs->rs_bound_node.next = NULL;
                rds_sock_addref(rs);
index 19a4fee..f505855 100644
@@ -155,7 +155,7 @@ static struct rds_connection *__rds_conn_create(struct net *net,
        struct hlist_head *head = rds_conn_bucket(laddr, faddr);
        struct rds_transport *loop_trans;
        unsigned long flags;
-       int ret;
+       int ret, i;
 
        rcu_read_lock();
        conn = rds_conn_lookup(net, head, laddr, faddr, trans);
@@ -211,6 +211,12 @@ static struct rds_connection *__rds_conn_create(struct net *net,
 
        conn->c_trans = trans;
 
+       init_waitqueue_head(&conn->c_hs_waitq);
+       for (i = 0; i < RDS_MPATH_WORKERS; i++) {
+               __rds_conn_path_init(conn, &conn->c_path[i],
+                                    is_outgoing);
+               conn->c_path[i].cp_index = i;
+       }
        ret = trans->conn_alloc(conn, gfp);
        if (ret) {
                kmem_cache_free(rds_conn_slab, conn);
@@ -263,14 +269,6 @@ static struct rds_connection *__rds_conn_create(struct net *net,
                        kmem_cache_free(rds_conn_slab, conn);
                        conn = found;
                } else {
-                       int i;
-
-                       for (i = 0; i < RDS_MPATH_WORKERS; i++) {
-                               __rds_conn_path_init(conn, &conn->c_path[i],
-                                                    is_outgoing);
-                               conn->c_path[i].cp_index = i;
-                       }
-
                        hlist_add_head_rcu(&conn->c_hash_node, head);
                        rds_cong_add_conn(conn);
                        rds_conn_count++;
@@ -668,6 +666,7 @@ EXPORT_SYMBOL_GPL(rds_conn_path_drop);
 
 void rds_conn_drop(struct rds_connection *conn)
 {
+       WARN_ON(conn->c_trans->t_mp_capable);
        rds_conn_path_drop(&conn->c_path[0]);
 }
 EXPORT_SYMBOL_GPL(rds_conn_drop);
index 756c737..6cb9106 100644
@@ -41,6 +41,7 @@ static unsigned int   rds_exthdr_size[__RDS_EXTHDR_MAX] = {
 [RDS_EXTHDR_VERSION]   = sizeof(struct rds_ext_header_version),
 [RDS_EXTHDR_RDMA]      = sizeof(struct rds_ext_header_rdma),
 [RDS_EXTHDR_RDMA_DEST] = sizeof(struct rds_ext_header_rdma_dest),
+[RDS_EXTHDR_NPATHS]    = sizeof(u16),
 };
 
 
index 6ef07bd..b2d17f0 100644
@@ -85,7 +85,9 @@ enum {
 #define RDS_RECV_REFILL                3
 
 /* Max number of multipaths per RDS connection. Must be a power of 2 */
-#define        RDS_MPATH_WORKERS       1
+#define        RDS_MPATH_WORKERS       8
+#define        RDS_MPATH_HASH(rs, n) (jhash_1word((rs)->rs_bound_port, \
+                              (rs)->rs_hash_initval) & ((n) - 1))
 
 /* Per mpath connection state */
 struct rds_conn_path {
@@ -131,7 +133,8 @@ struct rds_connection {
        __be32                  c_laddr;
        __be32                  c_faddr;
        unsigned int            c_loopback:1,
-                               c_pad_to_32:31;
+                               c_ping_triggered:1,
+                               c_pad_to_32:30;
        int                     c_npaths;
        struct rds_connection   *c_passive;
        struct rds_transport    *c_trans;
@@ -147,6 +150,7 @@ struct rds_connection {
        unsigned long           c_map_queued;
 
        struct rds_conn_path    c_path[RDS_MPATH_WORKERS];
+       wait_queue_head_t       c_hs_waitq; /* handshake waitq */
 };
 
 static inline
@@ -166,6 +170,17 @@ void rds_conn_net_set(struct rds_connection *conn, struct net *net)
 #define RDS_FLAG_RETRANSMITTED 0x04
 #define RDS_MAX_ADV_CREDIT     255
 
+/* RDS_FLAG_PROBE_PORT is the reserved sport used for sending a ping
+ * probe to exchange control information before establishing a connection.
+ * Currently the control information that is exchanged is the number of
+ * supported paths. If the peer is a legacy (older kernel revision) peer,
+ * it returns a pong message without the additional control information,
+ * which alerts the sender that the peer is running an older revision.
+ */
+#define RDS_FLAG_PROBE_PORT    1
+#define        RDS_HS_PROBE(sport, dport) \
+               ((sport == RDS_FLAG_PROBE_PORT && dport == 0) || \
+                (sport == 0 && dport == RDS_FLAG_PROBE_PORT))
 /*
  * Maximum space available for extension headers.
  */
@@ -225,6 +240,11 @@ struct rds_ext_header_rdma_dest {
        __be32                  h_rdma_offset;
 };
 
+/* Extension header announcing number of paths.
+ * Implicit length = 2 bytes.
+ */
+#define RDS_EXTHDR_NPATHS      4
+
 #define __RDS_EXTHDR_MAX       16 /* for now */
 
 struct rds_incoming {
@@ -545,6 +565,7 @@ struct rds_sock {
        /* Socket options - in case there will be more */
        unsigned char           rs_recverr,
                                rs_cong_monitor;
+       u32                     rs_hash_initval;
 };
 
 static inline struct rds_sock *rds_sk_to_rs(const struct sock *sk)
index fed53a6..cbfabdf 100644
@@ -156,6 +156,67 @@ static void rds_recv_incoming_exthdrs(struct rds_incoming *inc, struct rds_sock
        }
 }
 
+static void rds_recv_hs_exthdrs(struct rds_header *hdr,
+                               struct rds_connection *conn)
+{
+       unsigned int pos = 0, type, len;
+       union {
+               struct rds_ext_header_version version;
+               u16 rds_npaths;
+       } buffer;
+
+       while (1) {
+               len = sizeof(buffer);
+               type = rds_message_next_extension(hdr, &pos, &buffer, &len);
+               if (type == RDS_EXTHDR_NONE)
+                       break;
+               /* Process extension header here */
+               switch (type) {
+               case RDS_EXTHDR_NPATHS:
+                       conn->c_npaths = min_t(int, RDS_MPATH_WORKERS,
+                                              buffer.rds_npaths);
+                       break;
+               default:
+                       pr_warn_ratelimited("ignoring unknown exthdr type 0x%x\n",
+                                           type);
+               }
+       }
+       /* if RDS_EXTHDR_NPATHS was not found, default to a single path */
+       conn->c_npaths = max_t(int, conn->c_npaths, 1);
+}
+
+/* rds_start_mprds() will synchronously start multiple paths when appropriate.
+ * The scheme is based on the following rules:
+ *
+ * 1. rds_sendmsg on first connect attempt sends the probe ping, with the
+ *    sender's npaths (s_npaths)
+ * 2. rcvr of probe-ping knows the mprds_paths = min(s_npaths, r_npaths). It
+ *    sends back a probe-pong with r_npaths. After that, if rcvr is the
+ *    smaller ip addr, it starts rds_conn_path_connect_if_down on all
+ *    mprds_paths.
+ * 3. sender gets woken up, and can move to rds_conn_path_connect_if_down.
+ *    If it is the smaller ipaddr, rds_conn_path_connect_if_down can be
+ *    called after reception of the probe-pong on all mprds_paths.
+ *    Otherwise (sender of probe-ping is not the smaller ip addr): just call
+ *    rds_conn_path_connect_if_down on the hashed path. (see rule 4)
+ * 4. when cp_index > 0, rds_connect_worker must only trigger
+ *    a connection if laddr < faddr.
+ * 5. The sender may end up queuing the packet on the cp; it will be sent
+ *    out later, once the connection is completed.
+ */
+static void rds_start_mprds(struct rds_connection *conn)
+{
+       int i;
+       struct rds_conn_path *cp;
+
+       if (conn->c_npaths > 1 && conn->c_laddr < conn->c_faddr) {
+               for (i = 1; i < conn->c_npaths; i++) {
+                       cp = &conn->c_path[i];
+                       rds_conn_path_connect_if_down(cp);
+               }
+       }
+}
+
 /*
  * The transport must make sure that this is serialized against other
  * rx and conn reset on this specific conn.
@@ -232,6 +293,20 @@ void rds_recv_incoming(struct rds_connection *conn, __be32 saddr, __be32 daddr,
                }
                rds_stats_inc(s_recv_ping);
                rds_send_pong(cp, inc->i_hdr.h_sport);
+               /* if this is a handshake ping, start multipath if necessary */
+               if (RDS_HS_PROBE(inc->i_hdr.h_sport, inc->i_hdr.h_dport)) {
+                       rds_recv_hs_exthdrs(&inc->i_hdr, cp->cp_conn);
+                       rds_start_mprds(cp->cp_conn);
+               }
+               goto out;
+       }
+
+       if (inc->i_hdr.h_dport ==  RDS_FLAG_PROBE_PORT &&
+           inc->i_hdr.h_sport == 0) {
+               rds_recv_hs_exthdrs(&inc->i_hdr, cp->cp_conn);
+               /* if this is a handshake pong, start multipath if necessary */
+               rds_start_mprds(cp->cp_conn);
+               wake_up(&cp->cp_conn->c_hs_waitq);
                goto out;
        }
 
index 5a9caf1..896626b 100644
@@ -963,6 +963,29 @@ static int rds_cmsg_send(struct rds_sock *rs, struct rds_message *rm,
        return ret;
 }
 
+static void rds_send_ping(struct rds_connection *conn);
+
+static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
+{
+       int hash;
+
+       if (conn->c_npaths == 0)
+               hash = RDS_MPATH_HASH(rs, RDS_MPATH_WORKERS);
+       else
+               hash = RDS_MPATH_HASH(rs, conn->c_npaths);
+       if (conn->c_npaths == 0 && hash != 0) {
+               rds_send_ping(conn);
+
+               if (conn->c_npaths == 0) {
+                       wait_event_interruptible(conn->c_hs_waitq,
+                                                (conn->c_npaths != 0));
+               }
+               if (conn->c_npaths == 1)
+                       hash = 0;
+       }
+       return hash;
+}
+
 int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
 {
        struct sock *sk = sock->sk;
@@ -1075,7 +1098,10 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
                goto out;
        }
 
-       cpath = &conn->c_path[0];
+       if (conn->c_trans->t_mp_capable)
+               cpath = &conn->c_path[rds_send_mprds_hash(rs, conn)];
+       else
+               cpath = &conn->c_path[0];
 
        rds_conn_path_connect_if_down(cpath);
 
@@ -1135,10 +1161,16 @@ out:
 }
 
 /*
- * Reply to a ping packet.
+ * send out a probe. Can be shared by rds_send_ping,
+ * rds_send_pong, rds_send_hb.
+ * rds_send_hb should use h_flags
+ *   RDS_FLAG_HB_PING|RDS_FLAG_ACK_REQUIRED
+ * or
+ *   RDS_FLAG_HB_PONG|RDS_FLAG_ACK_REQUIRED
  */
 int
-rds_send_pong(struct rds_conn_path *cp, __be16 dport)
+rds_send_probe(struct rds_conn_path *cp, __be16 sport,
+              __be16 dport, u8 h_flags)
 {
        struct rds_message *rm;
        unsigned long flags;
@@ -1166,9 +1198,18 @@ rds_send_pong(struct rds_conn_path *cp, __be16 dport)
        rm->m_inc.i_conn = cp->cp_conn;
        rm->m_inc.i_conn_path = cp;
 
-       rds_message_populate_header(&rm->m_inc.i_hdr, 0, dport,
+       rds_message_populate_header(&rm->m_inc.i_hdr, sport, dport,
                                    cp->cp_next_tx_seq);
+       rm->m_inc.i_hdr.h_flags |= h_flags;
        cp->cp_next_tx_seq++;
+
+       if (RDS_HS_PROBE(sport, dport) && cp->cp_conn->c_trans->t_mp_capable) {
+               u16 npaths = RDS_MPATH_WORKERS;
+
+               rds_message_add_extension(&rm->m_inc.i_hdr,
+                                         RDS_EXTHDR_NPATHS, &npaths,
+                                         sizeof(npaths));
+       }
        spin_unlock_irqrestore(&cp->cp_lock, flags);
 
        rds_stats_inc(s_send_queued);
@@ -1185,3 +1226,25 @@ out:
                rds_message_put(rm);
        return ret;
 }
+
+int
+rds_send_pong(struct rds_conn_path *cp, __be16 dport)
+{
+       return rds_send_probe(cp, 0, dport, 0);
+}
+
+void
+rds_send_ping(struct rds_connection *conn)
+{
+       unsigned long flags;
+       struct rds_conn_path *cp = &conn->c_path[0];
+
+       spin_lock_irqsave(&cp->cp_lock, flags);
+       if (conn->c_ping_triggered) {
+               spin_unlock_irqrestore(&cp->cp_lock, flags);
+               return;
+       }
+       conn->c_ping_triggered = 1;
+       spin_unlock_irqrestore(&cp->cp_lock, flags);
+       rds_send_probe(&conn->c_path[0], RDS_FLAG_PROBE_PORT, 0, 0);
+}
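The lazy path selection in rds_send_mprds_hash() above can be sketched outside the kernel roughly as follows. The modulo stand-in for RDS_MPATH_HASH(), the worker count, and the synchronous "pong" parameter are assumptions for illustration; the real code sends a ping and sleeps on c_hs_waitq until the handshake fills in c_npaths:

```c
#include <assert.h>

/* Assumed worker count for illustration; the kernel defines its own. */
#define RDS_MPATH_WORKERS 8

/* Toy model of rds_send_mprds_hash(): before the peer's path count
 * (c_npaths) is known, hash over the maximum worker count; a sender
 * that lands on a non-zero path triggers the handshake ping and
 * waits. If the peer turns out to be single-path, fall back to 0. */
static int mprds_hash(unsigned int bound, int *c_npaths, int pong_npaths)
{
	int hash;

	if (*c_npaths == 0)
		hash = bound % RDS_MPATH_WORKERS; /* stand-in for RDS_MPATH_HASH() */
	else
		hash = bound % *c_npaths;

	if (*c_npaths == 0 && hash != 0) {
		/* kernel: rds_send_ping() + wait_event_interruptible();
		 * here the pong's NPATHS value arrives synchronously */
		*c_npaths = pong_npaths;
		if (*c_npaths == 1)
			hash = 0;
	}
	return hash;
}
```

Note that a sender hashing to path 0 never blocks on the handshake, which keeps the common single-path case fast.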
index d24f6c1..fcddacc 100644
@@ -38,7 +38,6 @@
 #include <net/net_namespace.h>
 #include <net/netns/generic.h>
 
-#include "rds_single_path.h"
 #include "rds.h"
 #include "tcp.h"
 
@@ -168,35 +167,21 @@ void rds_tcp_reset_callbacks(struct socket *sock,
        wait_event(cp->cp_waitq, !test_bit(RDS_IN_XMIT, &cp->cp_flags));
        lock_sock(osock->sk);
        /* reset receive side state for rds_tcp_data_recv() for osock */
+       cancel_delayed_work_sync(&cp->cp_send_w);
+       cancel_delayed_work_sync(&cp->cp_recv_w);
        if (tc->t_tinc) {
                rds_inc_put(&tc->t_tinc->ti_inc);
                tc->t_tinc = NULL;
        }
        tc->t_tinc_hdr_rem = sizeof(struct rds_header);
        tc->t_tinc_data_rem = 0;
-       tc->t_sock = NULL;
-
-       write_lock_bh(&osock->sk->sk_callback_lock);
-
-       osock->sk->sk_user_data = NULL;
-       osock->sk->sk_data_ready = tc->t_orig_data_ready;
-       osock->sk->sk_write_space = tc->t_orig_write_space;
-       osock->sk->sk_state_change = tc->t_orig_state_change;
-       write_unlock_bh(&osock->sk->sk_callback_lock);
+       rds_tcp_restore_callbacks(osock, tc);
        release_sock(osock->sk);
        sock_release(osock);
 newsock:
        rds_send_path_reset(cp);
        lock_sock(sock->sk);
-       write_lock_bh(&sock->sk->sk_callback_lock);
-       tc->t_sock = sock;
-       tc->t_cpath = cp;
-       sock->sk->sk_user_data = cp;
-       sock->sk->sk_data_ready = rds_tcp_data_ready;
-       sock->sk->sk_write_space = rds_tcp_write_space;
-       sock->sk->sk_state_change = rds_tcp_state_change;
-
-       write_unlock_bh(&sock->sk->sk_callback_lock);
+       rds_tcp_set_callbacks(sock, cp);
        release_sock(sock->sk);
 }
 
@@ -372,6 +357,7 @@ struct rds_transport rds_tcp_transport = {
        .t_name                 = "tcp",
        .t_type                 = RDS_TRANS_TCP,
        .t_prefer_loopback      = 1,
+       .t_mp_capable           = 1,
 };
 
 static int rds_tcp_netid;
@@ -551,6 +537,13 @@ static void rds_tcp_kill_sock(struct net *net)
        }
 }
 
+void *rds_tcp_listen_sock_def_readable(struct net *net)
+{
+       struct rds_tcp_net *rtn = net_generic(net, rds_tcp_netid);
+
+       return rtn->rds_tcp_listen_sock->sk->sk_user_data;
+}
+
 static int rds_tcp_dev_event(struct notifier_block *this,
                             unsigned long event, void *ptr)
 {
index 1c3160f..9a1cc89 100644
@@ -70,6 +70,7 @@ void rds_tcp_listen_stop(struct socket *);
 void rds_tcp_listen_data_ready(struct sock *sk);
 int rds_tcp_accept_one(struct socket *sock);
 int rds_tcp_keepalive(struct socket *sock);
+void *rds_tcp_listen_sock_def_readable(struct net *net);
 
 /* tcp_recv.c */
 int rds_tcp_recv_init(void);
index c916715..05f61c5 100644
@@ -34,7 +34,6 @@
 #include <linux/in.h>
 #include <net/tcp.h>
 
-#include "rds_single_path.h"
 #include "rds.h"
 #include "tcp.h"
 
@@ -82,6 +81,12 @@ int rds_tcp_conn_path_connect(struct rds_conn_path *cp)
        struct rds_connection *conn = cp->cp_conn;
        struct rds_tcp_connection *tc = cp->cp_transport_data;
 
+       /* for multipath rds, we only trigger the connection after
+        * the handshake probe has determined the number of paths.
+        */
+       if (cp->cp_index > 0 && cp->cp_conn->c_npaths < 2)
+               return -EAGAIN;
+
        mutex_lock(&tc->t_conn_path_lock);
 
        if (rds_conn_path_up(cp)) {
index ca975a2..e0b23fb 100644
@@ -35,7 +35,6 @@
 #include <linux/in.h>
 #include <net/tcp.h>
 
-#include "rds_single_path.h"
 #include "rds.h"
 #include "tcp.h"
 
@@ -71,6 +70,52 @@ bail:
        return ret;
 }
 
+/* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
+ * client's ipaddr < server's ipaddr. Otherwise, close the accepted
+ * socket and force a reconnect from smaller -> larger ip addr. The reason
+ * we special case cp_index 0 is to let the rds probe ping from a node to
+ * itself get through efficiently.
+ * Since reconnects are only initiated from the node with the numerically
+ * smaller ip address, we recycle conns in RDS_CONN_ERROR on the passive side
+ * by moving them to CONNECTING in this function.
+ */
+struct rds_tcp_connection *rds_tcp_accept_one_path(struct rds_connection *conn)
+{
+       int i;
+       bool peer_is_smaller = (conn->c_faddr < conn->c_laddr);
+       int npaths = conn->c_npaths;
+
+       if (npaths <= 1) {
+               struct rds_conn_path *cp = &conn->c_path[0];
+               int ret;
+
+               ret = rds_conn_path_transition(cp, RDS_CONN_DOWN,
+                                              RDS_CONN_CONNECTING);
+               if (!ret)
+                       rds_conn_path_transition(cp, RDS_CONN_ERROR,
+                                                RDS_CONN_CONNECTING);
+               return cp->cp_transport_data;
+       }
+
+       /* for mprds, paths with cp_index > 0 MUST be initiated by the peer
+        * with the smaller address.
+        */
+       if (!peer_is_smaller)
+               return NULL;
+
+       for (i = 1; i < npaths; i++) {
+               struct rds_conn_path *cp = &conn->c_path[i];
+
+               if (rds_conn_path_transition(cp, RDS_CONN_DOWN,
+                                            RDS_CONN_CONNECTING) ||
+                   rds_conn_path_transition(cp, RDS_CONN_ERROR,
+                                            RDS_CONN_CONNECTING)) {
+                       return cp->cp_transport_data;
+               }
+       }
+       return NULL;
+}
+
 int rds_tcp_accept_one(struct socket *sock)
 {
        struct socket *new_sock = NULL;
@@ -120,12 +165,14 @@ int rds_tcp_accept_one(struct socket *sock)
         * If the client reboots, this conn will need to be cleaned up.
         * rds_tcp_state_change() will do that cleanup
         */
-       rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data;
-       cp = &conn->c_path[0];
-       rds_conn_transition(conn, RDS_CONN_DOWN, RDS_CONN_CONNECTING);
+       rs_tcp = rds_tcp_accept_one_path(conn);
+       if (!rs_tcp)
+               goto rst_nsk;
        mutex_lock(&rs_tcp->t_conn_path_lock);
-       conn_state = rds_conn_state(conn);
-       if (conn_state != RDS_CONN_CONNECTING && conn_state != RDS_CONN_UP)
+       cp = rs_tcp->t_cpath;
+       conn_state = rds_conn_path_state(cp);
+       if (conn_state != RDS_CONN_CONNECTING && conn_state != RDS_CONN_UP &&
+           conn_state != RDS_CONN_ERROR)
                goto rst_nsk;
        if (rs_tcp->t_sock) {
                /* Need to resolve a duelling SYN between peers.
@@ -135,11 +182,11 @@ int rds_tcp_accept_one(struct socket *sock)
                 * c_transport_data.
                 */
                if (ntohl(inet->inet_saddr) < ntohl(inet->inet_daddr) ||
-                   !conn->c_path[0].cp_outgoing) {
+                   !cp->cp_outgoing) {
                        goto rst_nsk;
                } else {
                        rds_tcp_reset_callbacks(new_sock, cp);
-                       conn->c_path[0].cp_outgoing = 0;
+                       cp->cp_outgoing = 0;
                        /* rds_connect_path_complete() marks RDS_CONN_UP */
                        rds_connect_path_complete(cp, RDS_CONN_RESETTING);
                }
@@ -183,6 +230,8 @@ void rds_tcp_listen_data_ready(struct sock *sk)
         */
        if (sk->sk_state == TCP_LISTEN)
                rds_tcp_accept_work(sk);
+       else
+               ready = rds_tcp_listen_sock_def_readable(sock_net(sk));
 
 out:
        read_unlock_bh(&sk->sk_callback_lock);
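The acceptance rule that rds_tcp_accept_one_path() enforces reduces to a small predicate: secondary paths may only be initiated by the numerically smaller address, while path 0 is exempt so a node's probe ping to itself still gets through. This sketch uses plain host-order u32 addresses for simplicity (the kernel compares the connection's __be32 c_faddr/c_laddr):

```c
#include <assert.h>
#include <stdint.h>

/* Tie-break for mprds dueling connects: paths with cp_index > 0 MUST
 * come from the peer with the smaller address; otherwise the accepted
 * socket is reset and the smaller side reconnects. */
static int may_accept_path(uint32_t laddr, uint32_t faddr, int cp_index)
{
	if (cp_index == 0)
		return 1;		/* path 0 (incl. self probe pings) always ok */
	return faddr < laddr;	/* peer must be the smaller address */
}
```

Mirroring this, the rds_connect_worker hunk below skips active connects for secondary paths on the larger-address side, so both ends agree on who initiates.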
index 57e0f58..89d09b4 100644
@@ -81,7 +81,8 @@ static int rds_tcp_sendmsg(struct socket *sock, void *data, unsigned int len)
 int rds_tcp_xmit(struct rds_connection *conn, struct rds_message *rm,
                 unsigned int hdr_off, unsigned int sg, unsigned int off)
 {
-       struct rds_tcp_connection *tc = conn->c_transport_data;
+       struct rds_conn_path *cp = rm->m_inc.i_conn_path;
+       struct rds_tcp_connection *tc = cp->cp_transport_data;
        int done = 0;
        int ret = 0;
        int more;
@@ -150,10 +151,17 @@ out:
                        rds_tcp_stats_inc(s_tcp_sndbuf_full);
                        ret = 0;
                } else {
-                       printk(KERN_WARNING "RDS/tcp: send to %pI4 "
-                              "returned %d, disconnecting and reconnecting\n",
-                              &conn->c_faddr, ret);
-                       rds_conn_drop(conn);
+                       /* No need to disconnect/reconnect if path_drop
+                        * has already been triggered, because, e.g., of
+                        * an incoming RST.
+                        */
+                       if (rds_conn_path_up(cp)) {
+                               pr_warn("RDS/tcp: send to %pI4 on cp [%d] "
+                                       "returned %d, "
+                                       "disconnecting and reconnecting\n",
+                                       &conn->c_faddr, cp->cp_index, ret);
+                               rds_conn_path_drop(cp);
+                       }
                }
        }
        if (done == 0)
index bc97d67..e42df11 100644
@@ -156,6 +156,8 @@ void rds_connect_worker(struct work_struct *work)
        struct rds_connection *conn = cp->cp_conn;
        int ret;
 
+       if (cp->cp_index > 1 && cp->cp_conn->c_laddr > cp->cp_conn->c_faddr)
+               return;
        clear_bit(RDS_RECONNECT_PENDING, &cp->cp_flags);
        ret = rds_conn_path_transition(cp, RDS_CONN_DOWN, RDS_CONN_CONNECTING);
        if (ret) {
index 79c4abc..0a63947 100644
@@ -164,7 +164,8 @@ static int rose_state3_machine(struct sock *sk, struct sk_buff *skb, int framety
                rose_frames_acked(sk, nr);
                if (ns == rose->vr) {
                        rose_start_idletimer(sk);
-                       if (sock_queue_rcv_skb(sk, skb) == 0) {
+                       if (sk_filter_trim_cap(sk, skb, ROSE_MIN_LEN) == 0 &&
+                           __sock_queue_rcv_skb(sk, skb) == 0) {
                                rose->vr = (rose->vr + 1) % ROSE_MODULUS;
                                queued = 1;
                        } else {
index 6522e50..10f3f48 100644
@@ -10,6 +10,7 @@ af-rxrpc-y := \
        conn_client.o \
        conn_event.o \
        conn_object.o \
+       conn_service.o \
        input.o \
        insecure.o \
        key.o \
index 5d3e795..88effad 100644
@@ -766,9 +766,9 @@ error_key_type:
 error_sock:
        proto_unregister(&rxrpc_proto);
 error_proto:
-       destroy_workqueue(rxrpc_workqueue);
-error_security:
        rxrpc_exit_security();
+error_security:
+       destroy_workqueue(rxrpc_workqueue);
 error_work_queue:
        kmem_cache_destroy(rxrpc_call_jar);
 error_call_jar:
@@ -788,27 +788,7 @@ static void __exit af_rxrpc_exit(void)
        proto_unregister(&rxrpc_proto);
        rxrpc_destroy_all_calls();
        rxrpc_destroy_all_connections();
-
        ASSERTCMP(atomic_read(&rxrpc_n_skbs), ==, 0);
-
-       /* We need to flush the scheduled work twice because the local endpoint
-        * records involve a work item in their destruction as they can only be
-        * destroyed from process context.  However, a connection may have a
-        * work item outstanding - and this will pin the local endpoint record
-        * until the connection goes away.
-        *
-        * Peers don't pin locals and calls pin sockets - which prevents the
-        * module from being unloaded - so we should only need two flushes.
-        */
-       _debug("flush scheduled work");
-       flush_workqueue(rxrpc_workqueue);
-       _debug("flush scheduled work 2");
-       flush_workqueue(rxrpc_workqueue);
-       _debug("synchronise RCU");
-       rcu_barrier();
-       _debug("destroy locals");
-       ASSERT(idr_is_empty(&rxrpc_client_conn_ids));
-       idr_destroy(&rxrpc_client_conn_ids);
        rxrpc_destroy_all_locals();
 
        remove_proc_entry("rxrpc_conns", init_net.proc_net);
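The label swap in the af_rxrpc init path above restores the usual goto-unwind invariant: each error label undoes only what has already been set up, in reverse order of setup. A generic sketch of that pattern (names and the two-step setup are illustrative, not the real rxrpc symbols):

```c
#include <assert.h>
#include <stdbool.h>

static bool work_queue_up, security_up;

/* Two-step init with goto unwind: if step N fails, jump to the label
 * that tears down steps 1..N-1, never step N itself. */
static int init_demo(int fail_at)
{
	if (fail_at == 1)
		goto error_work_queue;	/* nothing succeeded yet */
	work_queue_up = true;

	if (fail_at == 2)
		goto error_security;	/* only the workqueue to undo */
	security_up = true;

	return 0;

error_security:
	work_queue_up = false;		/* destroy_workqueue() analogue */
error_work_queue:
	return -1;
}
```

The bug being fixed is exactly a mislabelled pair: jumping to a label that tears down a resource which was never created (or skips one that was).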
index 702db72..1bb9e7a 100644
@@ -10,6 +10,7 @@
  */
 
 #include <linux/atomic.h>
+#include <linux/seqlock.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
 #include <rxrpc/packet.h>
@@ -35,7 +36,6 @@ struct rxrpc_crypt {
        queue_delayed_work(rxrpc_workqueue, (WS), (D))
 
 #define rxrpc_queue_call(CALL) rxrpc_queue_work(&(CALL)->processor)
-#define rxrpc_queue_conn(CONN) rxrpc_queue_work(&(CONN)->processor)
 
 struct rxrpc_connection;
 
@@ -141,17 +141,16 @@ struct rxrpc_security {
        int (*init_connection_security)(struct rxrpc_connection *);
 
        /* prime a connection's packet security */
-       void (*prime_packet_security)(struct rxrpc_connection *);
+       int (*prime_packet_security)(struct rxrpc_connection *);
 
        /* impose security on a packet */
-       int (*secure_packet)(const struct rxrpc_call *,
+       int (*secure_packet)(struct rxrpc_call *,
                             struct sk_buff *,
                             size_t,
                             void *);
 
        /* verify the security on a received packet */
-       int (*verify_packet)(const struct rxrpc_call *, struct sk_buff *,
-                            u32 *);
+       int (*verify_packet)(struct rxrpc_call *, struct sk_buff *, u32 *);
 
        /* issue a challenge */
        int (*issue_challenge)(struct rxrpc_connection *);
@@ -208,7 +207,7 @@ struct rxrpc_peer {
        struct hlist_head       error_targets;  /* targets for net error distribution */
        struct work_struct      error_distributor;
        struct rb_root          service_conns;  /* Service connections */
-       rwlock_t                conn_lock;
+       seqlock_t               service_conn_lock;
        spinlock_t              lock;           /* access lock */
        unsigned int            if_mtu;         /* interface MTU for this peer */
        unsigned int            mtu;            /* network MTU for this peer */
@@ -231,18 +230,12 @@ struct rxrpc_peer {
  * Keys for matching a connection.
  */
 struct rxrpc_conn_proto {
-       unsigned long           hash_key;
-       struct rxrpc_local      *local;         /* Representation of local endpoint */
-       u32                     epoch;          /* epoch of this connection */
-       u32                     cid;            /* connection ID */
-       u8                      in_clientflag;  /* RXRPC_CLIENT_INITIATED if we are server */
-       u8                      addr_size;      /* Size of the address */
-       sa_family_t             family;         /* Transport protocol */
-       __be16                  port;           /* Peer UDP/UDP6 port */
-       union {                                 /* Peer address */
-               struct in_addr  ipv4_addr;
-               struct in6_addr ipv6_addr;
-               u32             raw_addr[0];
+       union {
+               struct {
+                       u32     epoch;          /* epoch of this connection */
+                       u32     cid;            /* connection ID */
+               };
+               u64             index_key;
        };
 };
 
@@ -255,6 +248,37 @@ struct rxrpc_conn_parameters {
        u32                     security_level; /* Security level selected */
 };
 
+/*
+ * Bits in the connection flags.
+ */
+enum rxrpc_conn_flag {
+       RXRPC_CONN_HAS_IDR,             /* Has a client conn ID assigned */
+       RXRPC_CONN_IN_SERVICE_CONNS,    /* Conn is in peer->service_conns */
+       RXRPC_CONN_IN_CLIENT_CONNS,     /* Conn is in local->client_conns */
+};
+
+/*
+ * Events that can be raised upon a connection.
+ */
+enum rxrpc_conn_event {
+       RXRPC_CONN_EV_CHALLENGE,        /* Send challenge packet */
+};
+
+/*
+ * The connection protocol state.
+ */
+enum rxrpc_conn_proto_state {
+       RXRPC_CONN_UNUSED,              /* Connection not yet attempted */
+       RXRPC_CONN_CLIENT,              /* Client connection */
+       RXRPC_CONN_SERVICE_UNSECURED,   /* Service unsecured connection */
+       RXRPC_CONN_SERVICE_CHALLENGING, /* Service challenging for security */
+       RXRPC_CONN_SERVICE,             /* Service secured connection */
+       RXRPC_CONN_REMOTELY_ABORTED,    /* Conn aborted by peer */
+       RXRPC_CONN_LOCALLY_ABORTED,     /* Conn aborted locally */
+       RXRPC_CONN_NETWORK_ERROR,       /* Conn terminated by network error */
+       RXRPC_CONN__NR_STATES
+};
+
 /*
  * RxRPC connection definition
  * - matched by { local, peer, epoch, conn_id, direction }
@@ -265,44 +289,38 @@ struct rxrpc_connection {
        struct rxrpc_conn_parameters params;
 
        spinlock_t              channel_lock;
-       struct rxrpc_call       *channels[RXRPC_MAXCALLS]; /* active calls */
+
+       struct rxrpc_channel {
+               struct rxrpc_call __rcu *call;          /* Active call */
+               u32                     call_id;        /* ID of current call */
+               u32                     call_counter;   /* Call ID counter */
+               u32                     last_call;      /* ID of last call */
+               u32                     last_result;    /* Result of last call (0/abort) */
+       } channels[RXRPC_MAXCALLS];
        wait_queue_head_t       channel_wq;     /* queue to wait for channel to become available */
 
+       struct rcu_head         rcu;
        struct work_struct      processor;      /* connection event processor */
        union {
                struct rb_node  client_node;    /* Node in local->client_conns */
                struct rb_node  service_node;   /* Node in peer->service_conns */
        };
        struct list_head        link;           /* link in master connection list */
-       struct rb_root          calls;          /* calls on this connection */
        struct sk_buff_head     rx_queue;       /* received conn-level packets */
        const struct rxrpc_security *security;  /* applied security module */
        struct key              *server_key;    /* security for this service */
        struct crypto_skcipher  *cipher;        /* encryption handle */
        struct rxrpc_crypt      csum_iv;        /* packet checksum base */
        unsigned long           flags;
-#define RXRPC_CONN_HAS_IDR     0               /* - Has a client conn ID assigned */
        unsigned long           events;
-#define RXRPC_CONN_CHALLENGE   0               /* send challenge packet */
        unsigned long           put_time;       /* Time at which last put */
-       rwlock_t                lock;           /* access lock */
        spinlock_t              state_lock;     /* state-change lock */
        atomic_t                usage;
-       enum {                                  /* current state of connection */
-               RXRPC_CONN_UNUSED,              /* - connection not yet attempted */
-               RXRPC_CONN_CLIENT,              /* - client connection */
-               RXRPC_CONN_SERVER_UNSECURED,    /* - server unsecured connection */
-               RXRPC_CONN_SERVER_CHALLENGING,  /* - server challenging for security */
-               RXRPC_CONN_SERVER,              /* - server secured connection */
-               RXRPC_CONN_REMOTELY_ABORTED,    /* - conn aborted by peer */
-               RXRPC_CONN_LOCALLY_ABORTED,     /* - conn aborted locally */
-               RXRPC_CONN_NETWORK_ERROR,       /* - conn terminated by network error */
-       } state;
+       enum rxrpc_conn_proto_state state : 8;  /* current state of connection */
        u32                     local_abort;    /* local abort code */
        u32                     remote_abort;   /* remote abort code */
        int                     error;          /* local error incurred */
        int                     debug_id;       /* debug ID for printks */
-       unsigned int            call_counter;   /* call ID counter */
        atomic_t                serial;         /* packet serial number counter */
        atomic_t                hi_serial;      /* highest serial number received */
        atomic_t                avail_chans;    /* number of channels available */
@@ -382,6 +400,7 @@ enum rxrpc_call_state {
  * - matched by { connection, call_id }
  */
 struct rxrpc_call {
+       struct rcu_head         rcu;
        struct rxrpc_connection *conn;          /* connection carrying call */
        struct rxrpc_sock       *socket;        /* socket responsible */
        struct timer_list       lifetimer;      /* lifetime remaining on call */
@@ -394,11 +413,11 @@ struct rxrpc_call {
        struct hlist_node       error_link;     /* link in error distribution list */
        struct list_head        accept_link;    /* calls awaiting acceptance */
        struct rb_node          sock_node;      /* node in socket call tree */
-       struct rb_node          conn_node;      /* node in connection call tree */
        struct sk_buff_head     rx_queue;       /* received packets */
        struct sk_buff_head     rx_oos_queue;   /* packets received out of sequence */
        struct sk_buff          *tx_pending;    /* Tx socket buffer being filled */
        wait_queue_head_t       tx_waitq;       /* wait for Tx window space to become available */
+       __be32                  crypto_buf[2];  /* Temporary packet crypto buffer */
        unsigned long           user_call_ID;   /* user-defined call ID */
        unsigned long           creation_jif;   /* time of call creation */
        unsigned long           flags;
@@ -442,19 +461,12 @@ struct rxrpc_call {
 #define RXRPC_ACKR_WINDOW_ASZ DIV_ROUND_UP(RXRPC_MAXACKS, BITS_PER_LONG)
        unsigned long           ackr_window[RXRPC_ACKR_WINDOW_ASZ + 1];
 
-       struct hlist_node       hash_node;
-       unsigned long           hash_key;       /* Full hash key */
-       u8                      in_clientflag;  /* Copy of conn->in_clientflag for hashing */
-       struct rxrpc_local      *local;         /* Local endpoint. Used for hashing. */
-       sa_family_t             family;         /* Frame protocol */
+       u8                      in_clientflag;  /* Copy of conn->in_clientflag */
+       struct rxrpc_local      *local;         /* Local endpoint. */
        u32                     call_id;        /* call ID on connection  */
        u32                     cid;            /* connection ID plus channel index */
        u32                     epoch;          /* epoch of this connection */
        u16                     service_id;     /* service ID */
-       union {                                 /* Peer IP address for hashing */
-               __be32  ipv4_addr;
-               __u8    ipv6_addr[16];          /* Anticipates eventual IPv6 support */
-       } peer_ip;
 };
 
 /*
@@ -502,8 +514,6 @@ extern struct kmem_cache *rxrpc_call_jar;
 extern struct list_head rxrpc_calls;
 extern rwlock_t rxrpc_call_lock;
 
-struct rxrpc_call *rxrpc_find_call_hash(struct rxrpc_host_header *,
-                                       void *, sa_family_t, const void *);
 struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *, unsigned long);
 struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
                                         struct rxrpc_conn_parameters *,
@@ -522,8 +532,10 @@ void __exit rxrpc_destroy_all_calls(void);
  */
 extern struct idr rxrpc_client_conn_ids;
 
-int rxrpc_get_client_connection_id(struct rxrpc_connection *, gfp_t);
-void rxrpc_put_client_connection_id(struct rxrpc_connection *);
+void rxrpc_destroy_client_conn_ids(void);
+int rxrpc_connect_call(struct rxrpc_call *, struct rxrpc_conn_parameters *,
+                      struct sockaddr_rxrpc *, gfp_t);
+void rxrpc_unpublish_client_conn(struct rxrpc_connection *);
 
 /*
  * conn_event.c
@@ -539,17 +551,14 @@ extern unsigned int rxrpc_connection_expiry;
 extern struct list_head rxrpc_connections;
 extern rwlock_t rxrpc_connection_lock;
 
-int rxrpc_connect_call(struct rxrpc_call *, struct rxrpc_conn_parameters *,
-                      struct sockaddr_rxrpc *, gfp_t);
-struct rxrpc_connection *rxrpc_find_connection(struct rxrpc_local *,
-                                              struct rxrpc_peer *,
-                                              struct sk_buff *);
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);
+struct rxrpc_connection *rxrpc_alloc_connection(gfp_t);
+struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *,
+                                                  struct sk_buff *);
+void __rxrpc_disconnect_call(struct rxrpc_call *);
 void rxrpc_disconnect_call(struct rxrpc_call *);
 void rxrpc_put_connection(struct rxrpc_connection *);
 void __exit rxrpc_destroy_all_connections(void);
-struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
-                                                  struct rxrpc_peer *,
-                                                  struct sk_buff *);
 
 static inline bool rxrpc_conn_is_client(const struct rxrpc_connection *conn)
 {
@@ -558,7 +567,7 @@ static inline bool rxrpc_conn_is_client(const struct rxrpc_connection *conn)
 
 static inline bool rxrpc_conn_is_service(const struct rxrpc_connection *conn)
 {
-       return conn->proto.in_clientflag;
+       return !rxrpc_conn_is_client(conn);
 }
 
 static inline void rxrpc_get_connection(struct rxrpc_connection *conn)
@@ -566,6 +575,31 @@ static inline void rxrpc_get_connection(struct rxrpc_connection *conn)
        atomic_inc(&conn->usage);
 }
 
+static inline
+struct rxrpc_connection *rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+{
+       return atomic_inc_not_zero(&conn->usage) ? conn : NULL;
+}
+
+static inline bool rxrpc_queue_conn(struct rxrpc_connection *conn)
+{
+       if (!rxrpc_get_connection_maybe(conn))
+               return false;
+       if (!rxrpc_queue_work(&conn->processor))
+               rxrpc_put_connection(conn);
+       return true;
+}
+
+/*
+ * conn_service.c
+ */
+struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *,
+                                                    struct sk_buff *);
+struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
+                                                  struct sockaddr_rxrpc *,
+                                                  struct sk_buff *);
+void rxrpc_unpublish_service_conn(struct rxrpc_connection *);
+
 /*
  * input.c
  */
@@ -618,6 +652,11 @@ static inline void rxrpc_put_local(struct rxrpc_local *local)
                __rxrpc_put_local(local);
 }
 
+static inline void rxrpc_queue_local(struct rxrpc_local *local)
+{
+       rxrpc_queue_work(&local->processor);
+}
+
 /*
  * misc.c
  */
@@ -722,8 +761,7 @@ static inline void rxrpc_sysctl_exit(void) {}
 /*
  * utils.c
  */
-void rxrpc_get_addr_from_skb(struct rxrpc_local *, const struct sk_buff *,
-                            struct sockaddr_rxrpc *);
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);
 
 /*
  * debug tracing
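The new rxrpc_queue_conn() helper above pairs a conditional reference grab (atomic_inc_not_zero) with queue_work(), dropping the extra reference again if the work item was already pending. A user-space model of that pattern using C11 atomics, with the workqueue modelled by a plain pending flag (all names here are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct conn {
	atomic_int usage;
	bool work_pending;	/* stands in for the workqueue pending bit */
};

/* Model of atomic_inc_not_zero(): take a ref only if still live. */
static bool get_maybe(struct conn *c)
{
	int old = atomic_load(&c->usage);
	do {
		if (old == 0)
			return false;	/* object already dying: hands off */
	} while (!atomic_compare_exchange_weak(&c->usage, &old, old + 1));
	return true;
}

static void put(struct conn *c)
{
	atomic_fetch_sub(&c->usage, 1);
}

/* queue_work() returns false if the work was already pending. */
static bool queue_work_model(struct conn *c)
{
	if (c->work_pending)
		return false;
	c->work_pending = true;
	return true;
}

static bool queue_conn(struct conn *c)
{
	if (!get_maybe(c))
		return false;		/* dead object: nothing queued */
	if (!queue_work_model(c))
		put(c);			/* already queued: drop our extra ref */
	return true;
}
```

The reference taken on success is owned by the queued work item and is released by the connection processor when it finishes, so the object cannot be freed while work is outstanding.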
index 202e053..0b28321 100644
@@ -75,7 +75,6 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
 {
        struct rxrpc_connection *conn;
        struct rxrpc_skb_priv *sp, *nsp;
-       struct rxrpc_peer *peer;
        struct rxrpc_call *call;
        struct sk_buff *notification;
        int ret;
@@ -94,15 +93,7 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
        rxrpc_new_skb(notification);
        notification->mark = RXRPC_SKB_MARK_NEW_CALL;
 
-       peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
-       if (!peer) {
-               _debug("no peer");
-               ret = -EBUSY;
-               goto error;
-       }
-
-       conn = rxrpc_incoming_connection(local, peer, skb);
-       rxrpc_put_peer(peer);
+       conn = rxrpc_incoming_connection(local, srx, skb);
        if (IS_ERR(conn)) {
                _debug("no conn");
                ret = PTR_ERR(conn);
@@ -128,12 +119,11 @@ static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
 
                spin_lock(&call->conn->state_lock);
                if (sp->hdr.securityIndex > 0 &&
-                   call->conn->state == RXRPC_CONN_SERVER_UNSECURED) {
+                   call->conn->state == RXRPC_CONN_SERVICE_UNSECURED) {
                        _debug("await conn sec");
                        list_add_tail(&call->accept_link, &rx->secureq);
-                       call->conn->state = RXRPC_CONN_SERVER_CHALLENGING;
-                       rxrpc_get_connection(call->conn);
-                       set_bit(RXRPC_CONN_CHALLENGE, &call->conn->events);
+                       call->conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
+                       set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
                        rxrpc_queue_conn(call->conn);
                } else {
                        _debug("conn ready");
@@ -227,20 +217,8 @@ void rxrpc_accept_incoming_calls(struct rxrpc_local *local)
        whdr._rsvd      = 0;
        whdr.serviceId  = htons(sp->hdr.serviceId);
 
-       /* determine the remote address */
-       memset(&srx, 0, sizeof(srx));
-       srx.srx_family = AF_RXRPC;
-       srx.transport.family = local->srx.transport.family;
-       srx.transport_type = local->srx.transport_type;
-       switch (srx.transport.family) {
-       case AF_INET:
-               srx.transport_len = sizeof(struct sockaddr_in);
-               srx.transport.sin.sin_port = udp_hdr(skb)->source;
-               srx.transport.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
-               break;
-       default:
-               goto busy;
-       }
+       if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
+               goto drop;
 
        /* get the socket providing the service */
        read_lock_bh(&local->services_lock);
@@ -286,6 +264,10 @@ busy:
        rxrpc_free_skb(skb);
        return;
 
+drop:
+       rxrpc_free_skb(skb);
+       return;
+
 invalid_service:
        skb->priority = RX_INVALID_OPERATION;
        rxrpc_reject_packet(local, skb);
index 0ba8429..fc32aa5 100644

@@ -858,11 +858,6 @@ void rxrpc_process_call(struct work_struct *work)
        iov[0].iov_len  = sizeof(whdr);
 
        /* deal with events of a final nature */
-       if (test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
-               rxrpc_release_call(call);
-               clear_bit(RXRPC_CALL_EV_RELEASE, &call->events);
-       }
-
        if (test_bit(RXRPC_CALL_EV_RCVD_ERROR, &call->events)) {
                enum rxrpc_skb_mark mark;
                int error;
@@ -1094,7 +1089,7 @@ void rxrpc_process_call(struct work_struct *work)
 
                if (call->state == RXRPC_CALL_SERVER_SECURING) {
                        _debug("securing");
-                       write_lock(&call->conn->lock);
+                       write_lock(&call->socket->call_lock);
                        if (!test_bit(RXRPC_CALL_RELEASED, &call->flags) &&
                            !test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
                                _debug("not released");
@@ -1102,7 +1097,7 @@ void rxrpc_process_call(struct work_struct *work)
                                list_move_tail(&call->accept_link,
                                               &call->socket->acceptq);
                        }
-                       write_unlock(&call->conn->lock);
+                       write_unlock(&call->socket->call_lock);
                        read_lock(&call->state_lock);
                        if (call->state < RXRPC_CALL_COMPLETE)
                                set_bit(RXRPC_CALL_EV_POST_ACCEPT, &call->events);
@@ -1144,6 +1139,11 @@ void rxrpc_process_call(struct work_struct *work)
                goto maybe_reschedule;
        }
 
+       if (test_bit(RXRPC_CALL_EV_RELEASE, &call->events)) {
+               rxrpc_release_call(call);
+               clear_bit(RXRPC_CALL_EV_RELEASE, &call->events);
+       }
+
        /* other events may have been raised since we started checking */
        goto maybe_reschedule;
 
index ad933da..91287c9 100644
@@ -14,7 +14,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 #include <linux/circ_buf.h>
-#include <linux/hashtable.h>
 #include <linux/spinlock_types.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
@@ -61,142 +60,6 @@ static void rxrpc_dead_call_expired(unsigned long _call);
 static void rxrpc_ack_time_expired(unsigned long _call);
 static void rxrpc_resend_time_expired(unsigned long _call);
 
-static DEFINE_SPINLOCK(rxrpc_call_hash_lock);
-static DEFINE_HASHTABLE(rxrpc_call_hash, 10);
-
-/*
- * Hash function for rxrpc_call_hash
- */
-static unsigned long rxrpc_call_hashfunc(
-       u8              in_clientflag,
-       u32             cid,
-       u32             call_id,
-       u32             epoch,
-       u16             service_id,
-       sa_family_t     family,
-       void            *localptr,
-       unsigned int    addr_size,
-       const u8        *peer_addr)
-{
-       const u16 *p;
-       unsigned int i;
-       unsigned long key;
-
-       _enter("");
-
-       key = (unsigned long)localptr;
-       /* We just want to add up the __be32 values, so forcing the
-        * cast should be okay.
-        */
-       key += epoch;
-       key += service_id;
-       key += call_id;
-       key += (cid & RXRPC_CIDMASK) >> RXRPC_CIDSHIFT;
-       key += cid & RXRPC_CHANNELMASK;
-       key += in_clientflag;
-       key += family;
-       /* Step through the peer address in 16-bit portions for speed */
-       for (i = 0, p = (const u16 *)peer_addr; i < addr_size >> 1; i++, p++)
-               key += *p;
-       _leave(" key = 0x%lx", key);
-       return key;
-}
-
-/*
- * Add a call to the hashtable
- */
-static void rxrpc_call_hash_add(struct rxrpc_call *call)
-{
-       unsigned long key;
-       unsigned int addr_size = 0;
-
-       _enter("");
-       switch (call->family) {
-       case AF_INET:
-               addr_size = sizeof(call->peer_ip.ipv4_addr);
-               break;
-       case AF_INET6:
-               addr_size = sizeof(call->peer_ip.ipv6_addr);
-               break;
-       default:
-               break;
-       }
-       key = rxrpc_call_hashfunc(call->in_clientflag, call->cid,
-                                 call->call_id, call->epoch,
-                                 call->service_id, call->family,
-                                 call->conn->params.local, addr_size,
-                                 call->peer_ip.ipv6_addr);
-       /* Store the full key in the call */
-       call->hash_key = key;
-       spin_lock(&rxrpc_call_hash_lock);
-       hash_add_rcu(rxrpc_call_hash, &call->hash_node, key);
-       spin_unlock(&rxrpc_call_hash_lock);
-       _leave("");
-}
-
-/*
- * Remove a call from the hashtable
- */
-static void rxrpc_call_hash_del(struct rxrpc_call *call)
-{
-       _enter("");
-       spin_lock(&rxrpc_call_hash_lock);
-       hash_del_rcu(&call->hash_node);
-       spin_unlock(&rxrpc_call_hash_lock);
-       _leave("");
-}
-
-/*
- * Find a call in the hashtable and return it, or NULL if it
- * isn't there.
- */
-struct rxrpc_call *rxrpc_find_call_hash(
-       struct rxrpc_host_header *hdr,
-       void            *localptr,
-       sa_family_t     family,
-       const void      *peer_addr)
-{
-       unsigned long key;
-       unsigned int addr_size = 0;
-       struct rxrpc_call *call = NULL;
-       struct rxrpc_call *ret = NULL;
-       u8 in_clientflag = hdr->flags & RXRPC_CLIENT_INITIATED;
-
-       _enter("");
-       switch (family) {
-       case AF_INET:
-               addr_size = sizeof(call->peer_ip.ipv4_addr);
-               break;
-       case AF_INET6:
-               addr_size = sizeof(call->peer_ip.ipv6_addr);
-               break;
-       default:
-               break;
-       }
-
-       key = rxrpc_call_hashfunc(in_clientflag, hdr->cid, hdr->callNumber,
-                                 hdr->epoch, hdr->serviceId,
-                                 family, localptr, addr_size,
-                                 peer_addr);
-       hash_for_each_possible_rcu(rxrpc_call_hash, call, hash_node, key) {
-               if (call->hash_key == key &&
-                   call->call_id == hdr->callNumber &&
-                   call->cid == hdr->cid &&
-                   call->in_clientflag == in_clientflag &&
-                   call->service_id == hdr->serviceId &&
-                   call->family == family &&
-                   call->local == localptr &&
-                   memcmp(call->peer_ip.ipv6_addr, peer_addr,
-                          addr_size) == 0 &&
-                   call->epoch == hdr->epoch) {
-                       ret = call;
-                       break;
-               }
-       }
-       _leave(" = %p", ret);
-       return ret;
-}
-
 /*
  * find an extant server call
  * - called in process context with IRQs enabled
@@ -305,20 +168,7 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
        call->socket = rx;
        call->rx_data_post = 1;
 
-       /* Record copies of information for hashtable lookup */
-       call->family = rx->family;
        call->local = rx->local;
-       switch (call->family) {
-       case AF_INET:
-               call->peer_ip.ipv4_addr = srx->transport.sin.sin_addr.s_addr;
-               break;
-       case AF_INET6:
-               memcpy(call->peer_ip.ipv6_addr,
-                      srx->transport.sin6.sin6_addr.in6_u.u6_addr8,
-                      sizeof(call->peer_ip.ipv6_addr));
-               break;
-       }
-
        call->service_id = srx->srx_service;
        call->in_clientflag = 0;
 
@@ -345,9 +195,6 @@ static int rxrpc_begin_client_call(struct rxrpc_call *call,
 
        call->state = RXRPC_CALL_CLIENT_SEND_REQUEST;
 
-       /* Add the new call to the hashtable */
-       rxrpc_call_hash_add(call);
-
        spin_lock(&call->conn->params.peer->lock);
        hlist_add_head(&call->error_link, &call->conn->params.peer->error_targets);
        spin_unlock(&call->conn->params.peer->lock);
@@ -425,9 +272,10 @@ error:
        rxrpc_put_call(call);
 
        write_lock_bh(&rxrpc_call_lock);
-       list_del(&call->link);
+       list_del_init(&call->link);
        write_unlock_bh(&rxrpc_call_lock);
 
+       call->state = RXRPC_CALL_DEAD;
        rxrpc_put_call(call);
        _leave(" = %d", ret);
        return ERR_PTR(ret);
@@ -439,6 +287,7 @@ error:
         */
 found_user_ID_now_present:
        write_unlock(&rx->call_lock);
+       call->state = RXRPC_CALL_DEAD;
        rxrpc_put_call(call);
        _leave(" = -EEXIST [%p]", call);
        return ERR_PTR(-EEXIST);
@@ -454,8 +303,7 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
 {
        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
        struct rxrpc_call *call, *candidate;
-       struct rb_node **p, *parent;
-       u32 call_id;
+       u32 call_id, chan;
 
        _enter(",%d", conn->debug_id);
 
@@ -465,20 +313,23 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
        if (!candidate)
                return ERR_PTR(-EBUSY);
 
+       chan = sp->hdr.cid & RXRPC_CHANNELMASK;
        candidate->socket       = rx;
        candidate->conn         = conn;
        candidate->cid          = sp->hdr.cid;
        candidate->call_id      = sp->hdr.callNumber;
-       candidate->channel      = sp->hdr.cid & RXRPC_CHANNELMASK;
+       candidate->channel      = chan;
        candidate->rx_data_post = 0;
        candidate->state        = RXRPC_CALL_SERVER_ACCEPTING;
        if (conn->security_ix > 0)
                candidate->state = RXRPC_CALL_SERVER_SECURING;
 
-       write_lock_bh(&conn->lock);
+       spin_lock(&conn->channel_lock);
 
        /* set the channel for this call */
-       call = conn->channels[candidate->channel];
+       call = rcu_dereference_protected(conn->channels[chan].call,
+                                        lockdep_is_held(&conn->channel_lock));
+
        _debug("channel[%u] is %p", candidate->channel, call);
        if (call && call->call_id == sp->hdr.callNumber) {
                /* already set; must've been a duplicate packet */
@@ -507,9 +358,9 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
                       call->debug_id, rxrpc_call_states[call->state]);
 
                if (call->state >= RXRPC_CALL_COMPLETE) {
-                       conn->channels[call->channel] = NULL;
+                       __rxrpc_disconnect_call(call);
                } else {
-                       write_unlock_bh(&conn->lock);
+                       spin_unlock(&conn->channel_lock);
                        kmem_cache_free(rxrpc_call_jar, candidate);
                        _leave(" = -EBUSY");
                        return ERR_PTR(-EBUSY);
@@ -519,33 +370,22 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
        /* check the call number isn't duplicate */
        _debug("check dup");
        call_id = sp->hdr.callNumber;
-       p = &conn->calls.rb_node;
-       parent = NULL;
-       while (*p) {
-               parent = *p;
-               call = rb_entry(parent, struct rxrpc_call, conn_node);
-
-               /* The tree is sorted in order of the __be32 value without
-                * turning it into host order.
-                */
-               if (call_id < call->call_id)
-                       p = &(*p)->rb_left;
-               else if (call_id > call->call_id)
-                       p = &(*p)->rb_right;
-               else
-                       goto old_call;
-       }
+
+       /* We just ignore calls prior to the current call ID.  Terminated calls
+        * are handled via the connection.
+        */
+       if (call_id <= conn->channels[chan].call_counter)
+               goto old_call; /* TODO: Just drop packet */
 
        /* make the call available */
        _debug("new call");
        call = candidate;
        candidate = NULL;
-       rb_link_node(&call->conn_node, parent, p);
-       rb_insert_color(&call->conn_node, &conn->calls);
-       conn->channels[call->channel] = call;
+       conn->channels[chan].call_counter = call_id;
+       rcu_assign_pointer(conn->channels[chan].call, call);
        sock_hold(&rx->sk);
        rxrpc_get_connection(conn);
-       write_unlock_bh(&conn->lock);
+       spin_unlock(&conn->channel_lock);
 
        spin_lock(&conn->params.peer->lock);
        hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
@@ -555,27 +395,10 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
        list_add_tail(&call->link, &rxrpc_calls);
        write_unlock_bh(&rxrpc_call_lock);
 
-       /* Record copies of information for hashtable lookup */
-       call->family = rx->family;
        call->local = conn->params.local;
-       switch (call->family) {
-       case AF_INET:
-               call->peer_ip.ipv4_addr =
-                       conn->params.peer->srx.transport.sin.sin_addr.s_addr;
-               break;
-       case AF_INET6:
-               memcpy(call->peer_ip.ipv6_addr,
-                      conn->params.peer->srx.transport.sin6.sin6_addr.in6_u.u6_addr8,
-                      sizeof(call->peer_ip.ipv6_addr));
-               break;
-       default:
-               break;
-       }
        call->epoch = conn->proto.epoch;
        call->service_id = conn->params.service_id;
-       call->in_clientflag = conn->proto.in_clientflag;
-       /* Add the new call to the hashtable */
-       rxrpc_call_hash_add(call);
+       call->in_clientflag = RXRPC_CLIENT_INITIATED;
 
        _net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
 
@@ -585,19 +408,19 @@ struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *rx,
        return call;
 
 extant_call:
-       write_unlock_bh(&conn->lock);
+       spin_unlock(&conn->channel_lock);
        kmem_cache_free(rxrpc_call_jar, candidate);
        _leave(" = %p {%d} [extant]", call, call ? call->debug_id : -1);
        return call;
 
 aborted_call:
-       write_unlock_bh(&conn->lock);
+       spin_unlock(&conn->channel_lock);
        kmem_cache_free(rxrpc_call_jar, candidate);
        _leave(" = -ECONNABORTED");
        return ERR_PTR(-ECONNABORTED);
 
 old_call:
-       write_unlock_bh(&conn->lock);
+       spin_unlock(&conn->channel_lock);
        kmem_cache_free(rxrpc_call_jar, candidate);
        _leave(" = -ECONNRESET [old]");
        return ERR_PTR(-ECONNRESET);
@@ -626,6 +449,10 @@ void rxrpc_release_call(struct rxrpc_call *call)
         */
        _debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, conn);
 
+       spin_lock(&conn->params.peer->lock);
+       hlist_del_init(&call->error_link);
+       spin_unlock(&conn->params.peer->lock);
+
        write_lock_bh(&rx->call_lock);
        if (!list_empty(&call->accept_link)) {
                _debug("unlinking once-pending call %p { e=%lx f=%lx }",
@@ -641,24 +468,17 @@ void rxrpc_release_call(struct rxrpc_call *call)
        write_unlock_bh(&rx->call_lock);
 
        /* free up the channel for reuse */
-       spin_lock(&conn->channel_lock);
-       write_lock_bh(&conn->lock);
-       write_lock(&call->state_lock);
-
-       rxrpc_disconnect_call(call);
-
-       spin_unlock(&conn->channel_lock);
+       write_lock_bh(&call->state_lock);
 
        if (call->state < RXRPC_CALL_COMPLETE &&
            call->state != RXRPC_CALL_CLIENT_FINAL_ACK) {
                _debug("+++ ABORTING STATE %d +++\n", call->state);
                call->state = RXRPC_CALL_LOCALLY_ABORTED;
                call->local_abort = RX_CALL_DEAD;
-               set_bit(RXRPC_CALL_EV_ABORT, &call->events);
-               rxrpc_queue_call(call);
        }
-       write_unlock(&call->state_lock);
-       write_unlock_bh(&conn->lock);
+       write_unlock_bh(&call->state_lock);
+
+       rxrpc_disconnect_call(call);
 
        /* clean up the Rx queue */
        if (!skb_queue_empty(&call->rx_queue) ||
@@ -791,6 +611,17 @@ void __rxrpc_put_call(struct rxrpc_call *call)
        _leave("");
 }
 
+/*
+ * Final call destruction under RCU.
+ */
+static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
+{
+       struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
+
+       rxrpc_purge_queue(&call->rx_queue);
+       kmem_cache_free(rxrpc_call_jar, call);
+}
+
 /*
  * clean up a call
  */
@@ -815,19 +646,7 @@ static void rxrpc_cleanup_call(struct rxrpc_call *call)
                return;
        }
 
-       if (call->conn) {
-               spin_lock(&call->conn->params.peer->lock);
-               hlist_del_init(&call->error_link);
-               spin_unlock(&call->conn->params.peer->lock);
-
-               write_lock_bh(&call->conn->lock);
-               rb_erase(&call->conn_node, &call->conn->calls);
-               write_unlock_bh(&call->conn->lock);
-               rxrpc_put_connection(call->conn);
-       }
-
-       /* Remove the call from the hash */
-       rxrpc_call_hash_del(call);
+       ASSERTCMP(call->conn, ==, NULL);
 
        if (call->acks_window) {
                _debug("kill Tx window %d",
@@ -855,7 +674,7 @@ static void rxrpc_cleanup_call(struct rxrpc_call *call)
        rxrpc_purge_queue(&call->rx_queue);
        ASSERT(skb_queue_empty(&call->rx_oos_queue));
        sock_put(&call->socket->sk);
-       kmem_cache_free(rxrpc_call_jar, call);
+       call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
 }
 
 /*
index 82488d6..9e91f27 100644
@@ -33,7 +33,8 @@ static DEFINE_SPINLOCK(rxrpc_conn_id_lock);
  * client conns away from the current allocation point to try and keep the IDs
  * concentrated.  We will also need to retire connections from an old epoch.
  */
-int rxrpc_get_client_connection_id(struct rxrpc_connection *conn, gfp_t gfp)
+static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
+                                         gfp_t gfp)
 {
        u32 epoch;
        int id;
@@ -83,7 +84,7 @@ error:
 /*
  * Release a connection ID for a client connection from the global pool.
  */
-void rxrpc_put_client_connection_id(struct rxrpc_connection *conn)
+static void rxrpc_put_client_connection_id(struct rxrpc_connection *conn)
 {
        if (test_bit(RXRPC_CONN_HAS_IDR, &conn->flags)) {
                spin_lock(&rxrpc_conn_id_lock);
@@ -92,3 +93,280 @@ void rxrpc_put_client_connection_id(struct rxrpc_connection *conn)
                spin_unlock(&rxrpc_conn_id_lock);
        }
 }
+
+/*
+ * Destroy the client connection ID tree.
+ */
+void rxrpc_destroy_client_conn_ids(void)
+{
+       struct rxrpc_connection *conn;
+       int id;
+
+       if (!idr_is_empty(&rxrpc_client_conn_ids)) {
+               idr_for_each_entry(&rxrpc_client_conn_ids, conn, id) {
+                       pr_err("AF_RXRPC: Leaked client conn %p {%d}\n",
+                              conn, atomic_read(&conn->usage));
+               }
+               BUG();
+       }
+
+       idr_destroy(&rxrpc_client_conn_ids);
+}
+
+/*
+ * Allocate a client connection.  The caller must take care to clear any
+ * padding bytes in *cp.
+ */
+static struct rxrpc_connection *
+rxrpc_alloc_client_connection(struct rxrpc_conn_parameters *cp, gfp_t gfp)
+{
+       struct rxrpc_connection *conn;
+       int ret;
+
+       _enter("");
+
+       conn = rxrpc_alloc_connection(gfp);
+       if (!conn) {
+               _leave(" = -ENOMEM");
+               return ERR_PTR(-ENOMEM);
+       }
+
+       conn->params            = *cp;
+       conn->out_clientflag    = RXRPC_CLIENT_INITIATED;
+       conn->state             = RXRPC_CONN_CLIENT;
+
+       ret = rxrpc_get_client_connection_id(conn, gfp);
+       if (ret < 0)
+               goto error_0;
+
+       ret = rxrpc_init_client_conn_security(conn);
+       if (ret < 0)
+               goto error_1;
+
+       ret = conn->security->prime_packet_security(conn);
+       if (ret < 0)
+               goto error_2;
+
+       write_lock(&rxrpc_connection_lock);
+       list_add_tail(&conn->link, &rxrpc_connections);
+       write_unlock(&rxrpc_connection_lock);
+
+       /* We steal the caller's peer ref. */
+       cp->peer = NULL;
+       rxrpc_get_local(conn->params.local);
+       key_get(conn->params.key);
+
+       _leave(" = %p", conn);
+       return conn;
+
+error_2:
+       conn->security->clear(conn);
+error_1:
+       rxrpc_put_client_connection_id(conn);
+error_0:
+       kfree(conn);
+       _leave(" = %d", ret);
+       return ERR_PTR(ret);
+}
+
+/*
+ * find a connection for a call
+ * - called in process context with IRQs enabled
+ */
+int rxrpc_connect_call(struct rxrpc_call *call,
+                      struct rxrpc_conn_parameters *cp,
+                      struct sockaddr_rxrpc *srx,
+                      gfp_t gfp)
+{
+       struct rxrpc_connection *conn, *candidate = NULL;
+       struct rxrpc_local *local = cp->local;
+       struct rb_node *p, **pp, *parent;
+       long diff;
+       int chan;
+
+       DECLARE_WAITQUEUE(myself, current);
+
+       _enter("{%d,%lx},", call->debug_id, call->user_call_ID);
+
+       cp->peer = rxrpc_lookup_peer(cp->local, srx, gfp);
+       if (!cp->peer)
+               return -ENOMEM;
+
+       if (!cp->exclusive) {
+               /* Search for an existing client connection unless this is going
+                * to be a connection that's used exclusively for a single call.
+                */
+               _debug("search 1");
+               spin_lock(&local->client_conns_lock);
+               p = local->client_conns.rb_node;
+               while (p) {
+                       conn = rb_entry(p, struct rxrpc_connection, client_node);
+
+#define cmp(X) ((long)conn->params.X - (long)cp->X)
+                       diff = (cmp(peer) ?:
+                               cmp(key) ?:
+                               cmp(security_level));
+                       if (diff < 0)
+                               p = p->rb_left;
+                       else if (diff > 0)
+                               p = p->rb_right;
+                       else
+                               goto found_extant_conn;
+               }
+               spin_unlock(&local->client_conns_lock);
+       }
+
+       /* We didn't find a connection or we want an exclusive one. */
+       _debug("get new conn");
+       candidate = rxrpc_alloc_client_connection(cp, gfp);
+       if (!candidate) {
+               _leave(" = -ENOMEM");
+               return -ENOMEM;
+       }
+
+       if (cp->exclusive) {
+               /* Assign the call on an exclusive connection to channel 0 and
+                * don't add the connection to the endpoint's shareable conn
+                * lookup tree.
+                */
+               _debug("exclusive chan 0");
+               conn = candidate;
+               atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
+               spin_lock(&conn->channel_lock);
+               chan = 0;
+               goto found_channel;
+       }
+
+       /* We need to redo the search before attempting to add a new connection
+        * lest we race with someone else adding a conflicting instance.
+        */
+       _debug("search 2");
+       spin_lock(&local->client_conns_lock);
+
+       pp = &local->client_conns.rb_node;
+       parent = NULL;
+       while (*pp) {
+               parent = *pp;
+               conn = rb_entry(parent, struct rxrpc_connection, client_node);
+
+               diff = (cmp(peer) ?:
+                       cmp(key) ?:
+                       cmp(security_level));
+               if (diff < 0)
+                       pp = &(*pp)->rb_left;
+               else if (diff > 0)
+                       pp = &(*pp)->rb_right;
+               else
+                       goto found_extant_conn;
+       }
+
+       /* The second search also failed; simply add the new connection with
+        * the new call in channel 0.  Note that we need to take the channel
+        * lock before dropping the client conn lock.
+        */
+       _debug("new conn");
+       set_bit(RXRPC_CONN_IN_CLIENT_CONNS, &candidate->flags);
+       rb_link_node(&candidate->client_node, parent, pp);
+       rb_insert_color(&candidate->client_node, &local->client_conns);
+attached:
+       conn = candidate;
+       candidate = NULL;
+
+       atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
+       spin_lock(&conn->channel_lock);
+       spin_unlock(&local->client_conns_lock);
+       chan = 0;
+
+found_channel:
+       _debug("found chan");
+       call->conn      = conn;
+       call->channel   = chan;
+       call->epoch     = conn->proto.epoch;
+       call->cid       = conn->proto.cid | chan;
+       call->call_id   = ++conn->channels[chan].call_counter;
+       conn->channels[chan].call_id = call->call_id;
+       rcu_assign_pointer(conn->channels[chan].call, call);
+
+       _net("CONNECT call %d on conn %d", call->debug_id, conn->debug_id);
+
+       spin_unlock(&conn->channel_lock);
+       rxrpc_put_peer(cp->peer);
+       cp->peer = NULL;
+       _leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
+       return 0;
+
+       /* We found a potentially suitable connection already in existence.  If
+        * we can reuse it (ie. its usage count hasn't been reduced to 0 by the
+        * reaper), discard any candidate we may have allocated, and try to get
+        * a channel on this one, otherwise we have to replace it.
+        */
+found_extant_conn:
+       _debug("found conn");
+       if (!rxrpc_get_connection_maybe(conn)) {
+               set_bit(RXRPC_CONN_IN_CLIENT_CONNS, &candidate->flags);
+               rb_replace_node(&conn->client_node,
+                               &candidate->client_node,
+                               &local->client_conns);
+               clear_bit(RXRPC_CONN_IN_CLIENT_CONNS, &conn->flags);
+               goto attached;
+       }
+
+       spin_unlock(&local->client_conns_lock);
+
+       rxrpc_put_connection(candidate);
+
+       if (!atomic_add_unless(&conn->avail_chans, -1, 0)) {
+               if (!gfpflags_allow_blocking(gfp)) {
+                       rxrpc_put_connection(conn);
+                       _leave(" = -EAGAIN");
+                       return -EAGAIN;
+               }
+
+               add_wait_queue(&conn->channel_wq, &myself);
+               for (;;) {
+                       set_current_state(TASK_INTERRUPTIBLE);
+                       if (atomic_add_unless(&conn->avail_chans, -1, 0))
+                               break;
+                       if (signal_pending(current))
+                               goto interrupted;
+                       schedule();
+               }
+               remove_wait_queue(&conn->channel_wq, &myself);
+               __set_current_state(TASK_RUNNING);
+       }
+
+       /* The connection allegedly now has a free channel and we can now
+        * attach the call to it.
+        */
+       spin_lock(&conn->channel_lock);
+
+       for (chan = 0; chan < RXRPC_MAXCALLS; chan++)
+               if (!conn->channels[chan].call)
+                       goto found_channel;
+       BUG();
+
+interrupted:
+       remove_wait_queue(&conn->channel_wq, &myself);
+       __set_current_state(TASK_RUNNING);
+       rxrpc_put_connection(conn);
+       rxrpc_put_peer(cp->peer);
+       cp->peer = NULL;
+       _leave(" = -ERESTARTSYS");
+       return -ERESTARTSYS;
+}
+
+/*
+ * Remove a client connection from the local endpoint's tree, thereby removing
+ * it as a target for reuse for new client calls.
+ */
+void rxrpc_unpublish_client_conn(struct rxrpc_connection *conn)
+{
+       struct rxrpc_local *local = conn->params.local;
+
+       spin_lock(&local->client_conns_lock);
+       if (test_and_clear_bit(RXRPC_CONN_IN_CLIENT_CONNS, &conn->flags))
+               rb_erase(&conn->client_node, &local->client_conns);
+       spin_unlock(&local->client_conns_lock);
+
+       rxrpc_put_client_connection_id(conn);
+}
index bf69715..cee0f35 100644
@@ -31,15 +31,17 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn, int state,
                              u32 abort_code)
 {
        struct rxrpc_call *call;
-       struct rb_node *p;
+       int i;
 
        _enter("{%d},%x", conn->debug_id, abort_code);
 
-       read_lock_bh(&conn->lock);
+       spin_lock(&conn->channel_lock);
 
-       for (p = rb_first(&conn->calls); p; p = rb_next(p)) {
-               call = rb_entry(p, struct rxrpc_call, conn_node);
-               write_lock(&call->state_lock);
+       for (i = 0; i < RXRPC_MAXCALLS; i++) {
+               call = rcu_dereference_protected(
+                       conn->channels[i].call,
+                       lockdep_is_held(&conn->channel_lock));
+               write_lock_bh(&call->state_lock);
                if (call->state <= RXRPC_CALL_COMPLETE) {
                        call->state = state;
                        if (state == RXRPC_CALL_LOCALLY_ABORTED) {
@@ -51,10 +53,10 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn, int state,
                        }
                        rxrpc_queue_call(call);
                }
-               write_unlock(&call->state_lock);
+               write_unlock_bh(&call->state_lock);
        }
 
-       read_unlock_bh(&conn->lock);
+       spin_unlock(&conn->channel_lock);
        _leave("");
 }
 
@@ -188,18 +190,24 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
                if (ret < 0)
                        return ret;
 
-               conn->security->prime_packet_security(conn);
-               read_lock_bh(&conn->lock);
+               ret = conn->security->prime_packet_security(conn);
+               if (ret < 0)
+                       return ret;
+
+               spin_lock(&conn->channel_lock);
                spin_lock(&conn->state_lock);
 
-               if (conn->state == RXRPC_CONN_SERVER_CHALLENGING) {
-                       conn->state = RXRPC_CONN_SERVER;
+               if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
+                       conn->state = RXRPC_CONN_SERVICE;
                        for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
-                               rxrpc_call_is_secure(conn->channels[loop]);
+                               rxrpc_call_is_secure(
+                                       rcu_dereference_protected(
+                                               conn->channels[loop].call,
+                                               lockdep_is_held(&conn->channel_lock)));
                }
 
                spin_unlock(&conn->state_lock);
-               read_unlock_bh(&conn->lock);
+               spin_unlock(&conn->channel_lock);
                return 0;
 
        default:
@@ -263,12 +271,8 @@ void rxrpc_process_connection(struct work_struct *work)
 
        _enter("{%d}", conn->debug_id);
 
-       rxrpc_get_connection(conn);
-
-       if (test_and_clear_bit(RXRPC_CONN_CHALLENGE, &conn->events)) {
+       if (test_and_clear_bit(RXRPC_CONN_EV_CHALLENGE, &conn->events))
                rxrpc_secure_connection(conn);
-               rxrpc_put_connection(conn);
-       }
 
        /* go through the conn-level event packets, releasing the ref on this
         * connection that each one has when we've finished with it */
@@ -283,7 +287,6 @@ void rxrpc_process_connection(struct work_struct *work)
                        goto requeue_and_leave;
                case -ECONNABORTED:
                default:
-                       rxrpc_put_connection(conn);
                        rxrpc_free_skb(skb);
                        break;
                }
@@ -301,7 +304,6 @@ requeue_and_leave:
 protocol_error:
        if (rxrpc_abort_connection(conn, -ret, abort_code) < 0)
                goto requeue_and_leave;
-       rxrpc_put_connection(conn);
        rxrpc_free_skb(skb);
        _leave(" [EPROTO]");
        goto out;
@@ -315,7 +317,7 @@ void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
        CHECK_SLAB_OKAY(&local->usage);
 
        skb_queue_tail(&local->reject_queue, skb);
-       rxrpc_queue_work(&local->processor);
+       rxrpc_queue_local(local);
 }
 
 /*
index 4bfad7c..896d844 100644
@@ -15,7 +15,6 @@
 #include <linux/slab.h>
 #include <linux/net.h>
 #include <linux/skbuff.h>
-#include <linux/crypto.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
 #include "ar-internal.h"
@@ -34,7 +33,7 @@ static DECLARE_DELAYED_WORK(rxrpc_connection_reap, rxrpc_connection_reaper);
 /*
  * allocate a new connection
  */
-static struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
+struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
 {
        struct rxrpc_connection *conn;
 
@@ -46,12 +45,13 @@ static struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
                init_waitqueue_head(&conn->channel_wq);
                INIT_WORK(&conn->processor, &rxrpc_process_connection);
                INIT_LIST_HEAD(&conn->link);
-               conn->calls = RB_ROOT;
                skb_queue_head_init(&conn->rx_queue);
                conn->security = &rxrpc_no_security;
-               rwlock_init(&conn->lock);
                spin_lock_init(&conn->state_lock);
-               atomic_set(&conn->usage, 1);
+               /* We maintain an extra ref on the connection whilst it is
+                * on the rxrpc_connections list.
+                */
+               atomic_set(&conn->usage, 2);
                conn->debug_id = atomic_inc_return(&rxrpc_debug_id);
                atomic_set(&conn->avail_chans, RXRPC_MAXCALLS);
                conn->size_align = 4;
@@ -63,465 +63,118 @@ static struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
 }
 
 /*
- * add a call to a connection's call-by-ID tree
- */
-static void rxrpc_add_call_ID_to_conn(struct rxrpc_connection *conn,
-                                     struct rxrpc_call *call)
-{
-       struct rxrpc_call *xcall;
-       struct rb_node *parent, **p;
-       __be32 call_id;
-
-       write_lock_bh(&conn->lock);
-
-       call_id = call->call_id;
-       p = &conn->calls.rb_node;
-       parent = NULL;
-       while (*p) {
-               parent = *p;
-               xcall = rb_entry(parent, struct rxrpc_call, conn_node);
-
-               if (call_id < xcall->call_id)
-                       p = &(*p)->rb_left;
-               else if (call_id > xcall->call_id)
-                       p = &(*p)->rb_right;
-               else
-                       BUG();
-       }
-
-       rb_link_node(&call->conn_node, parent, p);
-       rb_insert_color(&call->conn_node, &conn->calls);
-
-       write_unlock_bh(&conn->lock);
-}
-
-/*
- * Allocate a client connection.  The caller must take care to clear any
- * padding bytes in *cp.
+ * Look up a connection in the cache by protocol parameters.
+ *
+ * If successful, a pointer to the connection is returned, but no ref is taken.
+ * NULL is returned if there is no match.
+ *
+ * The caller must be holding the RCU read lock.
  */
-static struct rxrpc_connection *
-rxrpc_alloc_client_connection(struct rxrpc_conn_parameters *cp, gfp_t gfp)
+struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
+                                                  struct sk_buff *skb)
 {
        struct rxrpc_connection *conn;
-       int ret;
-
-       _enter("");
-
-       conn = rxrpc_alloc_connection(gfp);
-       if (!conn) {
-               _leave(" = -ENOMEM");
-               return ERR_PTR(-ENOMEM);
-       }
-
-       conn->params            = *cp;
-       conn->proto.local       = cp->local;
-       conn->proto.epoch       = rxrpc_epoch;
-       conn->proto.cid         = 0;
-       conn->proto.in_clientflag = 0;
-       conn->proto.family      = cp->peer->srx.transport.family;
-       conn->out_clientflag    = RXRPC_CLIENT_INITIATED;
-       conn->state             = RXRPC_CONN_CLIENT;
-
-       switch (conn->proto.family) {
-       case AF_INET:
-               conn->proto.addr_size = sizeof(conn->proto.ipv4_addr);
-               conn->proto.ipv4_addr = cp->peer->srx.transport.sin.sin_addr;
-               conn->proto.port = cp->peer->srx.transport.sin.sin_port;
-               break;
-       }
-
-       ret = rxrpc_get_client_connection_id(conn, gfp);
-       if (ret < 0)
-               goto error_0;
-
-       ret = rxrpc_init_client_conn_security(conn);
-       if (ret < 0)
-               goto error_1;
-
-       conn->security->prime_packet_security(conn);
-
-       write_lock(&rxrpc_connection_lock);
-       list_add_tail(&conn->link, &rxrpc_connections);
-       write_unlock(&rxrpc_connection_lock);
-
-       /* We steal the caller's peer ref. */
-       cp->peer = NULL;
-       rxrpc_get_local(conn->params.local);
-       key_get(conn->params.key);
-
-       _leave(" = %p", conn);
-       return conn;
-
-error_1:
-       rxrpc_put_client_connection_id(conn);
-error_0:
-       kfree(conn);
-       _leave(" = %d", ret);
-       return ERR_PTR(ret);
-}
-
-/*
- * find a connection for a call
- * - called in process context with IRQs enabled
- */
-int rxrpc_connect_call(struct rxrpc_call *call,
-                      struct rxrpc_conn_parameters *cp,
-                      struct sockaddr_rxrpc *srx,
-                      gfp_t gfp)
-{
-       struct rxrpc_connection *conn, *candidate = NULL;
-       struct rxrpc_local *local = cp->local;
-       struct rb_node *p, **pp, *parent;
-       long diff;
-       int chan;
+       struct rxrpc_conn_proto k;
+       struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+       struct sockaddr_rxrpc srx;
+       struct rxrpc_peer *peer;
 
-       DECLARE_WAITQUEUE(myself, current);
+       _enter(",%x", sp->hdr.cid & RXRPC_CIDMASK);
 
-       _enter("{%d,%lx},", call->debug_id, call->user_call_ID);
+       if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
+               goto not_found;
 
-       cp->peer = rxrpc_lookup_peer(cp->local, srx, gfp);
-       if (!cp->peer)
-               return -ENOMEM;
+       k.epoch = sp->hdr.epoch;
+       k.cid   = sp->hdr.cid & RXRPC_CIDMASK;
 
-       if (!cp->exclusive) {
-               /* Search for a existing client connection unless this is going
-                * to be a connection that's used exclusively for a single call.
-                */
-               _debug("search 1");
-               spin_lock(&local->client_conns_lock);
-               p = local->client_conns.rb_node;
-               while (p) {
-                       conn = rb_entry(p, struct rxrpc_connection, client_node);
-
-#define cmp(X) ((long)conn->params.X - (long)cp->X)
-                       diff = (cmp(peer) ?:
-                               cmp(key) ?:
-                               cmp(security_level));
-                       if (diff < 0)
-                               p = p->rb_left;
-                       else if (diff > 0)
-                               p = p->rb_right;
-                       else
-                               goto found_extant_conn;
-               }
-               spin_unlock(&local->client_conns_lock);
+       /* We may have to handle mixing IPv4 and IPv6 */
+       if (srx.transport.family != local->srx.transport.family) {
+               pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n",
+                                   srx.transport.family,
+                                   local->srx.transport.family);
+               goto not_found;
        }
 
-       /* We didn't find a connection or we want an exclusive one. */
-       _debug("get new conn");
-       candidate = rxrpc_alloc_client_connection(cp, gfp);
-       if (!candidate) {
-               _leave(" = -ENOMEM");
-               return -ENOMEM;
-       }
+       k.epoch = sp->hdr.epoch;
+       k.cid   = sp->hdr.cid & RXRPC_CIDMASK;
 
-       if (cp->exclusive) {
-               /* Assign the call on an exclusive connection to channel 0 and
-                * don't add the connection to the endpoint's shareable conn
-                * lookup tree.
+       if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
+               /* We need to look up service connections by the full protocol
+                * parameter set.  We look up the peer first as an intermediate
+                * step and then the connection from the peer's tree.
                 */
-               _debug("exclusive chan 0");
-               conn = candidate;
-               atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
-               spin_lock(&conn->channel_lock);
-               chan = 0;
-               goto found_channel;
-       }
-
-       /* We need to redo the search before attempting to add a new connection
-        * lest we race with someone else adding a conflicting instance.
-        */
-       _debug("search 2");
-       spin_lock(&local->client_conns_lock);
-
-       pp = &local->client_conns.rb_node;
-       parent = NULL;
-       while (*pp) {
-               parent = *pp;
-               conn = rb_entry(parent, struct rxrpc_connection, client_node);
-
-               diff = (cmp(peer) ?:
-                       cmp(key) ?:
-                       cmp(security_level));
-               if (diff < 0)
-                       pp = &(*pp)->rb_left;
-               else if (diff > 0)
-                       pp = &(*pp)->rb_right;
-               else
-                       goto found_extant_conn;
-       }
-
-       /* The second search also failed; simply add the new connection with
-        * the new call in channel 0.  Note that we need to take the channel
-        * lock before dropping the client conn lock.
-        */
-       _debug("new conn");
-       conn = candidate;
-       candidate = NULL;
-
-       rb_link_node(&conn->client_node, parent, pp);
-       rb_insert_color(&conn->client_node, &local->client_conns);
-
-       atomic_set(&conn->avail_chans, RXRPC_MAXCALLS - 1);
-       spin_lock(&conn->channel_lock);
-       spin_unlock(&local->client_conns_lock);
-       chan = 0;
-
-found_channel:
-       _debug("found chan");
-       call->conn      = conn;
-       call->channel   = chan;
-       call->epoch     = conn->proto.epoch;
-       call->cid       = conn->proto.cid | chan;
-       call->call_id   = ++conn->call_counter;
-       rcu_assign_pointer(conn->channels[chan], call);
-
-       _net("CONNECT call %d on conn %d", call->debug_id, conn->debug_id);
-
-       rxrpc_add_call_ID_to_conn(conn, call);
-       spin_unlock(&conn->channel_lock);
-       rxrpc_put_peer(cp->peer);
-       cp->peer = NULL;
-       _leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
-       return 0;
-
-       /* We found a suitable connection already in existence.  Discard any
-        * candidate we may have allocated, and try to get a channel on this
-        * one.
-        */
-found_extant_conn:
-       _debug("found conn");
-       rxrpc_get_connection(conn);
-       spin_unlock(&local->client_conns_lock);
-
-       rxrpc_put_connection(candidate);
-
-       if (!atomic_add_unless(&conn->avail_chans, -1, 0)) {
-               if (!gfpflags_allow_blocking(gfp)) {
-                       rxrpc_put_connection(conn);
-                       _leave(" = -EAGAIN");
-                       return -EAGAIN;
+               peer = rxrpc_lookup_peer_rcu(local, &srx);
+               if (!peer)
+                       goto not_found;
+               conn = rxrpc_find_service_conn_rcu(peer, skb);
+               if (!conn || atomic_read(&conn->usage) == 0)
+                       goto not_found;
+               _leave(" = %p", conn);
+               return conn;
+       } else {
+               /* Look up client connections by connection ID alone as their
+                * IDs are unique for this machine.
+                */
+               conn = idr_find(&rxrpc_client_conn_ids,
+                               sp->hdr.cid >> RXRPC_CIDSHIFT);
+               if (!conn || atomic_read(&conn->usage) == 0) {
+                       _debug("no conn");
+                       goto not_found;
                }
 
-               add_wait_queue(&conn->channel_wq, &myself);
-               for (;;) {
-                       set_current_state(TASK_INTERRUPTIBLE);
-                       if (atomic_add_unless(&conn->avail_chans, -1, 0))
-                               break;
-                       if (signal_pending(current))
-                               goto interrupted;
-                       schedule();
+               if (conn->proto.epoch != k.epoch ||
+                   conn->params.local != local)
+                       goto not_found;
+
+               peer = conn->params.peer;
+               switch (srx.transport.family) {
+               case AF_INET:
+                       if (peer->srx.transport.sin.sin_port !=
+                           srx.transport.sin.sin_port ||
+                           peer->srx.transport.sin.sin_addr.s_addr !=
+                           srx.transport.sin.sin_addr.s_addr)
+                               goto not_found;
+                       break;
+               default:
+                       BUG();
                }
-               remove_wait_queue(&conn->channel_wq, &myself);
-               __set_current_state(TASK_RUNNING);
-       }
-
-       /* The connection allegedly now has a free channel and we can now
-        * attach the call to it.
-        */
-       spin_lock(&conn->channel_lock);
-
-       for (chan = 0; chan < RXRPC_MAXCALLS; chan++)
-               if (!conn->channels[chan])
-                       goto found_channel;
-       BUG();
-
-interrupted:
-       remove_wait_queue(&conn->channel_wq, &myself);
-       __set_current_state(TASK_RUNNING);
-       rxrpc_put_connection(conn);
-       rxrpc_put_peer(cp->peer);
-       cp->peer = NULL;
-       _leave(" = -ERESTARTSYS");
-       return -ERESTARTSYS;
-}
-
-/*
- * get a record of an incoming connection
- */
-struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *local,
-                                                  struct rxrpc_peer *peer,
-                                                  struct sk_buff *skb)
-{
-       struct rxrpc_connection *conn, *candidate = NULL;
-       struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-       struct rb_node *p, **pp;
-       const char *new = "old";
-       __be32 epoch;
-       u32 cid;
-
-       _enter("");
-
-       ASSERT(sp->hdr.flags & RXRPC_CLIENT_INITIATED);
-
-       epoch = sp->hdr.epoch;
-       cid = sp->hdr.cid & RXRPC_CIDMASK;
-
-       /* search the connection list first */
-       read_lock_bh(&peer->conn_lock);
-
-       p = peer->service_conns.rb_node;
-       while (p) {
-               conn = rb_entry(p, struct rxrpc_connection, service_node);
 
-               _debug("maybe %x", conn->proto.cid);
-
-               if (epoch < conn->proto.epoch)
-                       p = p->rb_left;
-               else if (epoch > conn->proto.epoch)
-                       p = p->rb_right;
-               else if (cid < conn->proto.cid)
-                       p = p->rb_left;
-               else if (cid > conn->proto.cid)
-                       p = p->rb_right;
-               else
-                       goto found_extant_connection;
-       }
-       read_unlock_bh(&peer->conn_lock);
-
-       /* not yet present - create a candidate for a new record and then
-        * redo the search */
-       candidate = rxrpc_alloc_connection(GFP_NOIO);
-       if (!candidate) {
-               _leave(" = -ENOMEM");
-               return ERR_PTR(-ENOMEM);
+               _leave(" = %p", conn);
+               return conn;
        }
 
-       candidate->proto.local          = local;
-       candidate->proto.epoch          = sp->hdr.epoch;
-       candidate->proto.cid            = sp->hdr.cid & RXRPC_CIDMASK;
-       candidate->proto.in_clientflag  = RXRPC_CLIENT_INITIATED;
-       candidate->params.local         = local;
-       candidate->params.peer          = peer;
-       candidate->params.service_id    = sp->hdr.serviceId;
-       candidate->security_ix          = sp->hdr.securityIndex;
-       candidate->out_clientflag       = 0;
-       candidate->state                = RXRPC_CONN_SERVER;
-       if (candidate->params.service_id)
-               candidate->state        = RXRPC_CONN_SERVER_UNSECURED;
-
-       write_lock_bh(&peer->conn_lock);
-
-       pp = &peer->service_conns.rb_node;
-       p = NULL;
-       while (*pp) {
-               p = *pp;
-               conn = rb_entry(p, struct rxrpc_connection, service_node);
-
-               if (epoch < conn->proto.epoch)
-                       pp = &(*pp)->rb_left;
-               else if (epoch > conn->proto.epoch)
-                       pp = &(*pp)->rb_right;
-               else if (cid < conn->proto.cid)
-                       pp = &(*pp)->rb_left;
-               else if (cid > conn->proto.cid)
-                       pp = &(*pp)->rb_right;
-               else
-                       goto found_extant_second;
-       }
-
-       /* we can now add the new candidate to the list */
-       conn = candidate;
-       candidate = NULL;
-       rb_link_node(&conn->service_node, p, pp);
-       rb_insert_color(&conn->service_node, &peer->service_conns);
-       rxrpc_get_peer(peer);
-       rxrpc_get_local(local);
-
-       write_unlock_bh(&peer->conn_lock);
-
-       write_lock(&rxrpc_connection_lock);
-       list_add_tail(&conn->link, &rxrpc_connections);
-       write_unlock(&rxrpc_connection_lock);
-
-       new = "new";
-
-success:
-       _net("CONNECTION %s %d {%x}", new, conn->debug_id, conn->proto.cid);
-
-       _leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
-       return conn;
-
-       /* we found the connection in the list immediately */
-found_extant_connection:
-       if (sp->hdr.securityIndex != conn->security_ix) {
-               read_unlock_bh(&peer->conn_lock);
-               goto security_mismatch;
-       }
-       rxrpc_get_connection(conn);
-       read_unlock_bh(&peer->conn_lock);
-       goto success;
-
-       /* we found the connection on the second time through the list */
-found_extant_second:
-       if (sp->hdr.securityIndex != conn->security_ix) {
-               write_unlock_bh(&peer->conn_lock);
-               goto security_mismatch;
-       }
-       rxrpc_get_connection(conn);
-       write_unlock_bh(&peer->conn_lock);
-       kfree(candidate);
-       goto success;
-
-security_mismatch:
-       kfree(candidate);
-       _leave(" = -EKEYREJECTED");
-       return ERR_PTR(-EKEYREJECTED);
+not_found:
+       _leave(" = NULL");
+       return NULL;
 }
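The comment on `rxrpc_find_connection_rcu()` notes that no ref is taken, and the lookup skips any entry whose usage count has already hit zero; a caller that wants to keep the connection must then bump the count only if it is still non-zero. A userspace sketch of that get-unless-zero idiom (the `struct conn` and function name here are invented for illustration, not part of the patch):

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical stand-in for an rxrpc_connection: only the refcount matters. */
struct conn {
	atomic_int usage;
};

/* Try to take a reference, but refuse if the object is already dying
 * (usage == 0).  This is the kref_get_unless_zero() pattern expressed
 * with C11 atomics: retry the compare-exchange until we either install
 * old+1 or observe zero.
 */
static int conn_get_unless_zero(struct conn *c)
{
	int old = atomic_load(&c->usage);

	while (old != 0) {
		/* On failure, old is reloaded with the current value. */
		if (atomic_compare_exchange_weak(&c->usage, &old, old + 1))
			return 1;
	}
	return 0;	/* being destroyed; treat as not found */
}
```

Under RCU the object's memory is guaranteed to stay around for the read-side critical section, so it is safe to inspect `usage` even while the reaper is tearing the connection down.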
 
 /*
- * find a connection based on transport and RxRPC connection ID for an incoming
- * packet
+ * Disconnect a call and clear any channel it occupies when that call
+ * terminates.  The caller must hold the channel_lock and must release the
+ * call's ref on the connection.
  */
-struct rxrpc_connection *rxrpc_find_connection(struct rxrpc_local *local,
-                                              struct rxrpc_peer *peer,
-                                              struct sk_buff *skb)
+void __rxrpc_disconnect_call(struct rxrpc_call *call)
 {
-       struct rxrpc_connection *conn;
-       struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-       struct rb_node *p;
-       u32 epoch, cid;
-
-       _enter(",{%x,%x}", sp->hdr.cid, sp->hdr.flags);
+       struct rxrpc_connection *conn = call->conn;
+       struct rxrpc_channel *chan = &conn->channels[call->channel];
 
-       read_lock_bh(&peer->conn_lock);
+       _enter("%d,%d", conn->debug_id, call->channel);
 
-       cid     = sp->hdr.cid & RXRPC_CIDMASK;
-       epoch   = sp->hdr.epoch;
+       if (rcu_access_pointer(chan->call) == call) {
+               /* Save the result of the call so that we can repeat it if necessary
+                * through the channel, whilst disposing of the actual call record.
+                */
+               chan->last_result = call->local_abort;
+               smp_wmb();
+               chan->last_call = chan->call_id;
+               chan->call_id = chan->call_counter;
 
-       if (sp->hdr.flags & RXRPC_CLIENT_INITIATED) {
-               p = peer->service_conns.rb_node;
-               while (p) {
-                       conn = rb_entry(p, struct rxrpc_connection, service_node);
-
-                       _debug("maybe %x", conn->proto.cid);
-
-                       if (epoch < conn->proto.epoch)
-                               p = p->rb_left;
-                       else if (epoch > conn->proto.epoch)
-                               p = p->rb_right;
-                       else if (cid < conn->proto.cid)
-                               p = p->rb_left;
-                       else if (cid > conn->proto.cid)
-                               p = p->rb_right;
-                       else
-                               goto found;
-               }
-       } else {
-               conn = idr_find(&rxrpc_client_conn_ids, cid >> RXRPC_CIDSHIFT);
-               if (conn && conn->proto.epoch == epoch)
-                       goto found;
+               rcu_assign_pointer(chan->call, NULL);
+               atomic_inc(&conn->avail_chans);
+               wake_up(&conn->channel_wq);
        }
 
-       read_unlock_bh(&peer->conn_lock);
-       _leave(" = NULL");
-       return NULL;
-
-found:
-       rxrpc_get_connection(conn);
-       read_unlock_bh(&peer->conn_lock);
-       _leave(" = %p", conn);
-       return conn;
+       _leave("");
 }
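The `smp_wmb()` in `__rxrpc_disconnect_call()` orders the store of `last_result` before the store of `last_call`, so a reader that sees the retired call ID is guaranteed to see the matching result. The same ordering can be sketched in userspace C11 with a release store paired with an acquire load (the structure and function names below are made up for the sketch):

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical channel record: last_result must become visible no later
 * than last_call, mirroring "store result; smp_wmb(); store call id".
 */
struct chan {
	int last_result;
	atomic_int last_call;
};

/* Writer: the release store on last_call orders the plain store to
 * last_result before it.
 */
static void chan_retire(struct chan *ch, int call_id, int result)
{
	ch->last_result = result;
	atomic_store_explicit(&ch->last_call, call_id, memory_order_release);
}

/* Reader: an acquire load of last_call guarantees that if the expected
 * id is observed, the matching last_result is observed too.
 */
static int chan_lookup_result(struct chan *ch, int call_id, int *result)
{
	if (atomic_load_explicit(&ch->last_call,
				 memory_order_acquire) != call_id)
		return 0;
	*result = ch->last_result;
	return 1;
}
```

This is why the kernel code can answer "what did call N on this channel return?" without taking the channel lock on the read side.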
 
 /*
@@ -531,15 +184,13 @@ found:
 void rxrpc_disconnect_call(struct rxrpc_call *call)
 {
        struct rxrpc_connection *conn = call->conn;
-       unsigned chan = call->channel;
 
-       _enter("%d,%d", conn->debug_id, call->channel);
+       spin_lock(&conn->channel_lock);
+       __rxrpc_disconnect_call(call);
+       spin_unlock(&conn->channel_lock);
 
-       if (conn->channels[chan] == call) {
-               rcu_assign_pointer(conn->channels[chan], NULL);
-               atomic_inc(&conn->avail_chans);
-               wake_up(&conn->channel_wq);
-       }
+       call->conn = NULL;
+       rxrpc_put_connection(conn);
 }
 
 /*
@@ -553,10 +204,10 @@ void rxrpc_put_connection(struct rxrpc_connection *conn)
        _enter("%p{u=%d,d=%d}",
               conn, atomic_read(&conn->usage), conn->debug_id);
 
-       ASSERTCMP(atomic_read(&conn->usage), >, 0);
+       ASSERTCMP(atomic_read(&conn->usage), >, 1);
 
        conn->put_time = ktime_get_seconds();
-       if (atomic_dec_and_test(&conn->usage)) {
+       if (atomic_dec_return(&conn->usage) == 1) {
                _debug("zombie");
                rxrpc_queue_delayed_work(&rxrpc_connection_reap, 0);
        }
@@ -567,15 +218,17 @@ void rxrpc_put_connection(struct rxrpc_connection *conn)
 /*
  * destroy a virtual connection
  */
-static void rxrpc_destroy_connection(struct rxrpc_connection *conn)
+static void rxrpc_destroy_connection(struct rcu_head *rcu)
 {
-       _enter("%p{%d}", conn, atomic_read(&conn->usage));
+       struct rxrpc_connection *conn =
+               container_of(rcu, struct rxrpc_connection, rcu);
+
+       _enter("{%d,u=%d}", conn->debug_id, atomic_read(&conn->usage));
 
        ASSERTCMP(atomic_read(&conn->usage), ==, 0);
 
        _net("DESTROY CONN %d", conn->debug_id);
 
-       ASSERT(RB_EMPTY_ROOT(&conn->calls));
        rxrpc_purge_queue(&conn->rx_queue);
 
        conn->security->clear(conn);
@@ -594,59 +247,41 @@ static void rxrpc_destroy_connection(struct rxrpc_connection *conn)
 static void rxrpc_connection_reaper(struct work_struct *work)
 {
        struct rxrpc_connection *conn, *_p;
-       struct rxrpc_peer *peer;
-       unsigned long now, earliest, reap_time;
+       unsigned long reap_older_than, earliest, put_time, now;
 
        LIST_HEAD(graveyard);
 
        _enter("");
 
        now = ktime_get_seconds();
+       reap_older_than = now - rxrpc_connection_expiry;
        earliest = ULONG_MAX;
 
        write_lock(&rxrpc_connection_lock);
        list_for_each_entry_safe(conn, _p, &rxrpc_connections, link) {
-               _debug("reap CONN %d { u=%d,t=%ld }",
-                      conn->debug_id, atomic_read(&conn->usage),
-                      (long) now - (long) conn->put_time);
-
-               if (likely(atomic_read(&conn->usage) > 0))
+               ASSERTCMP(atomic_read(&conn->usage), >, 0);
+               if (likely(atomic_read(&conn->usage) > 1))
                        continue;
 
-               if (rxrpc_conn_is_client(conn)) {
-                       struct rxrpc_local *local = conn->params.local;
-                       spin_lock(&local->client_conns_lock);
-                       reap_time = conn->put_time + rxrpc_connection_expiry;
-
-                       if (atomic_read(&conn->usage) > 0) {
-                               ;
-                       } else if (reap_time <= now) {
-                               list_move_tail(&conn->link, &graveyard);
-                               rxrpc_put_client_connection_id(conn);
-                               rb_erase(&conn->client_node,
-                                        &local->client_conns);
-                       } else if (reap_time < earliest) {
-                               earliest = reap_time;
-                       }
-
-                       spin_unlock(&local->client_conns_lock);
-               } else {
-                       peer = conn->params.peer;
-                       write_lock_bh(&peer->conn_lock);
-                       reap_time = conn->put_time + rxrpc_connection_expiry;
-
-                       if (atomic_read(&conn->usage) > 0) {
-                               ;
-                       } else if (reap_time <= now) {
-                               list_move_tail(&conn->link, &graveyard);
-                               rb_erase(&conn->service_node,
-                                        &peer->service_conns);
-                       } else if (reap_time < earliest) {
-                               earliest = reap_time;
-                       }
-
-                       write_unlock_bh(&peer->conn_lock);
+               put_time = READ_ONCE(conn->put_time);
+               if (time_after(put_time, reap_older_than)) {
+                       if (time_before(put_time, earliest))
+                               earliest = put_time;
+                       continue;
                }
+
+               /* The usage count sits at 1 whilst the object is unused on the
+                * list; we reduce that to 0 to make the object unavailable.
+                */
+               if (atomic_cmpxchg(&conn->usage, 1, 0) != 1)
+                       continue;
+
+               if (rxrpc_conn_is_client(conn))
+                       rxrpc_unpublish_client_conn(conn);
+               else
+                       rxrpc_unpublish_service_conn(conn);
+
+               list_move_tail(&conn->link, &graveyard);
        }
        write_unlock(&rxrpc_connection_lock);
 
@@ -657,14 +292,14 @@ static void rxrpc_connection_reaper(struct work_struct *work)
                                         (earliest - now) * HZ);
        }
 
-       /* then destroy all those pulled out */
        while (!list_empty(&graveyard)) {
                conn = list_entry(graveyard.next, struct rxrpc_connection,
                                  link);
                list_del_init(&conn->link);
 
                ASSERTCMP(atomic_read(&conn->usage), ==, 0);
-               rxrpc_destroy_connection(conn);
+               skb_queue_purge(&conn->rx_queue);
+               call_rcu(&conn->rcu, rxrpc_destroy_connection);
        }
 
        _leave("");
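The reaper hunk above and the `atomic_set(&conn->usage, 2)` change in the allocator are two halves of one scheme: the global list permanently holds one reference, a put that leaves only that list reference schedules the reaper, and the reaper claims a dead connection by atomically swinging the count from 1 to 0 so two CPUs cannot both destroy it. A minimal userspace model of the lifecycle (names are invented; the "schedule" step is just a flag here):

```c
#include <assert.h>
#include <stdatomic.h>

struct conn {
	atomic_int usage;
	int reap_scheduled;	/* stands in for queueing delayed work */
};

static void conn_init(struct conn *c)
{
	/* One ref for the caller, one pinning the object on the list. */
	atomic_store(&c->usage, 2);
	c->reap_scheduled = 0;
}

static void conn_put(struct conn *c)
{
	/* Dropping to 1 means only the list still holds the object. */
	if (atomic_fetch_sub(&c->usage, 1) - 1 == 1)
		c->reap_scheduled = 1;
}

/* Returns 1 iff this caller won the race to destroy the object; a
 * concurrent get-unless-zero or a second reaper pass will see 0 and
 * back off.
 */
static int conn_try_reap(struct conn *c)
{
	int expected = 1;

	return atomic_compare_exchange_strong(&c->usage, &expected, 0);
}
```

The `cmpxchg` claim is what lets the reaper drop the old per-peer and per-local locking: unpublishing only happens after the winner has made the object unreachable to new reference takers.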
@@ -676,11 +311,30 @@ static void rxrpc_connection_reaper(struct work_struct *work)
  */
 void __exit rxrpc_destroy_all_connections(void)
 {
+       struct rxrpc_connection *conn, *_p;
+       bool leak = false;
+
        _enter("");
 
        rxrpc_connection_expiry = 0;
        cancel_delayed_work(&rxrpc_connection_reap);
        rxrpc_queue_delayed_work(&rxrpc_connection_reap, 0);
+       flush_workqueue(rxrpc_workqueue);
+
+       write_lock(&rxrpc_connection_lock);
+       list_for_each_entry_safe(conn, _p, &rxrpc_connections, link) {
+               pr_err("AF_RXRPC: Leaked conn %p {%d}\n",
+                      conn, atomic_read(&conn->usage));
+               leak = true;
+       }
+       write_unlock(&rxrpc_connection_lock);
+       BUG_ON(leak);
+
+       /* Make sure the local and peer records pinned by any dying connections
+        * are released.
+        */
+       rcu_barrier();
+       rxrpc_destroy_client_conn_ids();
 
        _leave("");
 }
diff --git a/net/rxrpc/conn_service.c b/net/rxrpc/conn_service.c
new file mode 100644
index 0000000..fd9027c
--- /dev/null
@@ -0,0 +1,230 @@
+/* Service connection management
+ *
+ * Copyright (C) 2016 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#include <linux/slab.h>
+#include "ar-internal.h"
+
+/*
+ * Find a service connection under RCU conditions.
+ *
+ * We could use a hash table, but that is subject to bucket stuffing by an
+ * attacker as the client gets to pick the epoch and cid values and would know
+ * the hash function.  So, instead, we use a hash table for the peer and from
+ * that an rbtree to find the service connection.  Under ordinary circumstances
+ * it might be slower than a large hash table, but it is at least limited in
+ * depth.
+ */
+struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
+                                                    struct sk_buff *skb)
+{
+       struct rxrpc_connection *conn = NULL;
+       struct rxrpc_conn_proto k;
+       struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+       struct rb_node *p;
+       unsigned int seq = 0;
+
+       k.epoch = sp->hdr.epoch;
+       k.cid   = sp->hdr.cid & RXRPC_CIDMASK;
+
+       do {
+               /* Unfortunately, rbtree walking doesn't give reliable results
+                * under just the RCU read lock, so we have to check for
+                * changes.
+                */
+               read_seqbegin_or_lock(&peer->service_conn_lock, &seq);
+
+               p = rcu_dereference_raw(peer->service_conns.rb_node);
+               while (p) {
+                       conn = rb_entry(p, struct rxrpc_connection, service_node);
+
+                       if (conn->proto.index_key < k.index_key)
+                               p = rcu_dereference_raw(p->rb_left);
+                       else if (conn->proto.index_key > k.index_key)
+                               p = rcu_dereference_raw(p->rb_right);
+                       else
+                               goto done;
+                       conn = NULL;
+               }
+       } while (need_seqretry(&peer->service_conn_lock, seq));
+
+done:
+       done_seqretry(&peer->service_conn_lock, seq);
+       _leave(" = %d", conn ? conn->debug_id : -1);
+       return conn;
+}
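As the comment in `rxrpc_find_service_conn_rcu()` says, an rbtree walk under bare RCU can see a momentarily inconsistent tree, so the walk is wrapped in a seqlock read loop and retried if a writer ran concurrently. (The kernel's `read_seqbegin_or_lock()` additionally falls back to taking the lock after a failed lockless pass; this sketch shows only the retry half.) A minimal userspace seqcount with invented names:

```c
#include <assert.h>
#include <stdatomic.h>

/* Minimal seqcount: writers make the sequence odd while updating,
 * readers retry if the sequence changed during their walk.
 */
struct seqcount {
	atomic_uint seq;
};

static unsigned read_seqbegin(struct seqcount *s)
{
	unsigned seq;

	/* Spin past any in-progress writer (odd sequence). */
	while ((seq = atomic_load(&s->seq)) & 1)
		;
	return seq;
}

static int read_seqretry(struct seqcount *s, unsigned seq)
{
	return atomic_load(&s->seq) != seq;
}

static void write_seqlock(struct seqcount *s)
{
	atomic_fetch_add(&s->seq, 1);	/* now odd: readers retry */
}

static void write_sequnlock(struct seqcount *s)
{
	atomic_fetch_add(&s->seq, 1);	/* even again */
}
```

A reader then walks the tree inside `do { seq = read_seqbegin(&sc); ...walk... } while (read_seqretry(&sc, seq));`, which is exactly the shape of the loop in the function above.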
+
+/*
+ * Insert a service connection into a peer's tree, thereby making it a target
+ * for incoming packets.
+ */
+static struct rxrpc_connection *
+rxrpc_publish_service_conn(struct rxrpc_peer *peer,
+                          struct rxrpc_connection *conn)
+{
+       struct rxrpc_connection *cursor = NULL;
+       struct rxrpc_conn_proto k = conn->proto;
+       struct rb_node **pp, *parent;
+
+       write_seqlock_bh(&peer->service_conn_lock);
+
+       pp = &peer->service_conns.rb_node;
+       parent = NULL;
+       while (*pp) {
+               parent = *pp;
+               cursor = rb_entry(parent,
+                                 struct rxrpc_connection, service_node);
+
+               if (cursor->proto.index_key < k.index_key)
+                       pp = &(*pp)->rb_left;
+               else if (cursor->proto.index_key > k.index_key)
+                       pp = &(*pp)->rb_right;
+               else
+                       goto found_extant_conn;
+       }
+
+       rb_link_node_rcu(&conn->service_node, parent, pp);
+       rb_insert_color(&conn->service_node, &peer->service_conns);
+conn_published:
+       set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
+       write_sequnlock_bh(&peer->service_conn_lock);
+       _leave(" = %d [new]", conn->debug_id);
+       return conn;
+
+found_extant_conn:
+       if (atomic_read(&cursor->usage) == 0)
+               goto replace_old_connection;
+       write_sequnlock_bh(&peer->service_conn_lock);
+       /* We should not be able to get here.  rxrpc_incoming_connection() is
+        * called in a non-reentrant context, so there can't be a race to
+        * insert a new connection.
+        */
+       BUG();
+
+replace_old_connection:
+       /* The old connection is from an outdated epoch. */
+       _debug("replace conn");
+       rb_replace_node_rcu(&cursor->service_node,
+                           &conn->service_node,
+                           &peer->service_conns);
+       clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &cursor->flags);
+       goto conn_published;
+}
+
+/*
+ * get a record of an incoming connection
+ */
+struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *local,
+                                                  struct sockaddr_rxrpc *srx,
+                                                  struct sk_buff *skb)
+{
+       struct rxrpc_connection *conn;
+       struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+       struct rxrpc_peer *peer;
+       const char *new = "old";
+
+       _enter("");
+
+       peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
+       if (!peer) {
+               _debug("no peer");
+               return ERR_PTR(-EBUSY);
+       }
+
+       ASSERT(sp->hdr.flags & RXRPC_CLIENT_INITIATED);
+
+       rcu_read_lock();
+       peer = rxrpc_lookup_peer_rcu(local, srx);
+       if (peer) {
+               conn = rxrpc_find_service_conn_rcu(peer, skb);
+               if (conn) {
+                       if (sp->hdr.securityIndex != conn->security_ix)
+                               goto security_mismatch_rcu;
+                       if (rxrpc_get_connection_maybe(conn))
+                               goto found_extant_connection_rcu;
+
+                       /* The conn has expired but we can't remove it without
+                        * the appropriate lock, so we attempt to replace it
+                        * when we have a new candidate.
+                        */
+               }
+
+               if (!rxrpc_get_peer_maybe(peer))
+                       peer = NULL;
+       }
+       rcu_read_unlock();
+
+       if (!peer) {
+               peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
+               if (!peer)
+                       goto enomem;
+       }
+
+       /* We don't have a matching record yet. */
+       conn = rxrpc_alloc_connection(GFP_NOIO);
+       if (!conn)
+               goto enomem_peer;
+
+       conn->proto.epoch       = sp->hdr.epoch;
+       conn->proto.cid         = sp->hdr.cid & RXRPC_CIDMASK;
+       conn->params.local      = local;
+       conn->params.peer       = peer;
+       conn->params.service_id = sp->hdr.serviceId;
+       conn->security_ix       = sp->hdr.securityIndex;
+       conn->out_clientflag    = 0;
+       conn->state             = RXRPC_CONN_SERVICE;
+       if (conn->params.service_id)
+               conn->state     = RXRPC_CONN_SERVICE_UNSECURED;
+
+       rxrpc_get_local(local);
+
+       write_lock(&rxrpc_connection_lock);
+       list_add_tail(&conn->link, &rxrpc_connections);
+       write_unlock(&rxrpc_connection_lock);
+
+       /* Make the connection a target for incoming packets. */
+       rxrpc_publish_service_conn(peer, conn);
+
+       new = "new";
+
+success:
+       _net("CONNECTION %s %d {%x}", new, conn->debug_id, conn->proto.cid);
+       _leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
+       return conn;
+
+found_extant_connection_rcu:
+       rcu_read_unlock();
+       goto success;
+
+security_mismatch_rcu:
+       rcu_read_unlock();
+       _leave(" = -EKEYREJECTED");
+       return ERR_PTR(-EKEYREJECTED);
+
+enomem_peer:
+       rxrpc_put_peer(peer);
+enomem:
+       _leave(" = -ENOMEM");
+       return ERR_PTR(-ENOMEM);
+}
+
+/*
+ * Remove the service connection from the peer's tree, thereby removing it as a
+ * target for incoming packets.
+ */
+void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn)
+{
+       struct rxrpc_peer *peer = conn->params.peer;
+
+       write_seqlock_bh(&peer->service_conn_lock);
+       if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags))
+               rb_erase(&conn->service_node, &peer->service_conns);
+       write_sequnlock_bh(&peer->service_conn_lock);
+}
index f4bd57b..991a20d 100644
@@ -476,7 +476,7 @@ static void rxrpc_process_jumbo_packet(struct rxrpc_call *call,
                sp->hdr.seq     += 1;
                sp->hdr.serial  += 1;
                sp->hdr.flags   = jhdr.flags;
-               sp->hdr._rsvd   = jhdr._rsvd;
+               sp->hdr._rsvd   = ntohs(jhdr._rsvd);
 
                _proto("Rx DATA Jumbo %%%u", sp->hdr.serial - 1);
 
@@ -575,14 +575,13 @@ done:
  * post connection-level events to the connection
  * - this includes challenges, responses and some aborts
  */
-static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
+static bool rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
                                      struct sk_buff *skb)
 {
        _enter("%p,%p", conn, skb);
 
-       rxrpc_get_connection(conn);
        skb_queue_tail(&conn->rx_queue, skb);
-       rxrpc_queue_conn(conn);
+       return rxrpc_queue_conn(conn);
 }
 
 /*
@@ -595,7 +594,7 @@ static void rxrpc_post_packet_to_local(struct rxrpc_local *local,
        _enter("%p,%p", local, skb);
 
        skb_queue_tail(&local->event_queue, skb);
-       rxrpc_queue_work(&local->processor);
+       rxrpc_queue_local(local);
 }
 
 /*
@@ -627,32 +626,6 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb)
        return 0;
 }
 
-static struct rxrpc_connection *rxrpc_conn_from_local(struct rxrpc_local *local,
-                                                     struct sk_buff *skb)
-{
-       struct rxrpc_peer *peer;
-       struct rxrpc_connection *conn;
-       struct sockaddr_rxrpc srx;
-
-       rxrpc_get_addr_from_skb(local, skb, &srx);
-       rcu_read_lock();
-       peer = rxrpc_lookup_peer_rcu(local, &srx);
-       if (!peer)
-               goto cant_find_peer;
-
-       conn = rxrpc_find_connection(local, peer, skb);
-       rcu_read_unlock();
-       if (!conn)
-               goto cant_find_conn;
-
-       return conn;
-
-cant_find_peer:
-       rcu_read_unlock();
-cant_find_conn:
-       return NULL;
-}
-
 /*
  * handle data received on the local endpoint
  * - may be called in interrupt context
@@ -663,6 +636,7 @@ cant_find_conn:
  */
 void rxrpc_data_ready(struct sock *sk)
 {
+       struct rxrpc_connection *conn;
        struct rxrpc_skb_priv *sp;
        struct rxrpc_local *local = sk->sk_user_data;
        struct sk_buff *skb;
@@ -726,34 +700,37 @@ void rxrpc_data_ready(struct sock *sk)
            (sp->hdr.callNumber == 0 || sp->hdr.seq == 0))
                goto bad_message;
 
-       if (sp->hdr.callNumber == 0) {
-               /* This is a connection-level packet. These should be
-                * fairly rare, so the extra overhead of looking them up the
-                * old-fashioned way doesn't really hurt */
-               struct rxrpc_connection *conn;
+       rcu_read_lock();
 
-               conn = rxrpc_conn_from_local(local, skb);
-               if (!conn)
-                       goto cant_route_call;
+retry_find_conn:
+       conn = rxrpc_find_connection_rcu(local, skb);
+       if (!conn)
+               goto cant_route_call;
 
+       if (sp->hdr.callNumber == 0) {
+               /* Connection-level packet */
                _debug("CONN %p {%d}", conn, conn->debug_id);
-               rxrpc_post_packet_to_conn(conn, skb);
-               rxrpc_put_connection(conn);
+               if (!rxrpc_post_packet_to_conn(conn, skb))
+                       goto retry_find_conn;
        } else {
-               struct rxrpc_call *call;
+               /* Call-bound packets are routed by connection channel. */
+               unsigned int channel = sp->hdr.cid & RXRPC_CHANNELMASK;
+               struct rxrpc_channel *chan = &conn->channels[channel];
+               struct rxrpc_call *call = rcu_dereference(chan->call);
 
-               call = rxrpc_find_call_hash(&sp->hdr, local,
-                                           AF_INET, &ip_hdr(skb)->saddr);
-               if (call)
-                       rxrpc_post_packet_to_call(call, skb);
-               else
+               if (!call || atomic_read(&call->usage) == 0)
                        goto cant_route_call;
+
+               rxrpc_post_packet_to_call(call, skb);
        }
 
+       rcu_read_unlock();
 out:
        return;
 
 cant_route_call:
+       rcu_read_unlock();
+
        _debug("can't route call");
        if (sp->hdr.flags & RXRPC_CLIENT_INITIATED &&
            sp->hdr.type == RXRPC_PACKET_TYPE_DATA) {
index e571403..c21ad21 100644
@@ -17,11 +17,12 @@ static int none_init_connection_security(struct rxrpc_connection *conn)
        return 0;
 }
 
-static void none_prime_packet_security(struct rxrpc_connection *conn)
+static int none_prime_packet_security(struct rxrpc_connection *conn)
 {
+       return 0;
 }
 
-static int none_secure_packet(const struct rxrpc_call *call,
+static int none_secure_packet(struct rxrpc_call *call,
                               struct sk_buff *skb,
                               size_t data_size,
                               void *sechdr)
@@ -29,7 +30,7 @@ static int none_secure_packet(const struct rxrpc_call *call,
        return 0;
 }
 
-static int none_verify_packet(const struct rxrpc_call *call,
+static int none_verify_packet(struct rxrpc_call *call,
                               struct sk_buff *skb,
                               u32 *_abort_code)
 {
index 3ab7764..a753796 100644
@@ -374,14 +374,17 @@ void __exit rxrpc_destroy_all_locals(void)
 
        _enter("");
 
-       if (list_empty(&rxrpc_local_endpoints))
-               return;
+       flush_workqueue(rxrpc_workqueue);
 
-       mutex_lock(&rxrpc_local_mutex);
-       list_for_each_entry(local, &rxrpc_local_endpoints, link) {
-               pr_err("AF_RXRPC: Leaked local %p {%d}\n",
-                      local, atomic_read(&local->usage));
+       if (!list_empty(&rxrpc_local_endpoints)) {
+               mutex_lock(&rxrpc_local_mutex);
+               list_for_each_entry(local, &rxrpc_local_endpoints, link) {
+                       pr_err("AF_RXRPC: Leaked local %p {%d}\n",
+                              local, atomic_read(&local->usage));
+               }
+               mutex_unlock(&rxrpc_local_mutex);
+               BUG();
        }
-       mutex_unlock(&rxrpc_local_mutex);
-       BUG();
+
+       rcu_barrier();
 }
index 01d4930..538e983 100644
@@ -189,7 +189,7 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
                INIT_WORK(&peer->error_distributor,
                          &rxrpc_peer_error_distributor);
                peer->service_conns = RB_ROOT;
-               rwlock_init(&peer->conn_lock);
+               seqlock_init(&peer->service_conn_lock);
                spin_lock_init(&peer->lock);
                peer->debug_id = atomic_inc_return(&rxrpc_debug_id);
        }
index 500cdcd..ced5f07 100644
 #include <net/af_rxrpc.h>
 #include "ar-internal.h"
 
-static const char *const rxrpc_conn_states[] = {
-       [RXRPC_CONN_UNUSED]             = "Unused  ",
-       [RXRPC_CONN_CLIENT]             = "Client  ",
-       [RXRPC_CONN_SERVER_UNSECURED]   = "SvUnsec ",
-       [RXRPC_CONN_SERVER_CHALLENGING] = "SvChall ",
-       [RXRPC_CONN_SERVER]             = "SvSecure",
-       [RXRPC_CONN_REMOTELY_ABORTED]   = "RmtAbort",
-       [RXRPC_CONN_LOCALLY_ABORTED]    = "LocAbort",
-       [RXRPC_CONN_NETWORK_ERROR]      = "NetError",
+static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
+       [RXRPC_CONN_UNUSED]                     = "Unused  ",
+       [RXRPC_CONN_CLIENT]                     = "Client  ",
+       [RXRPC_CONN_SERVICE_UNSECURED]          = "SvUnsec ",
+       [RXRPC_CONN_SERVICE_CHALLENGING]        = "SvChall ",
+       [RXRPC_CONN_SERVICE]                    = "SvSecure",
+       [RXRPC_CONN_REMOTELY_ABORTED]           = "RmtAbort",
+       [RXRPC_CONN_LOCALLY_ABORTED]            = "LocAbort",
+       [RXRPC_CONN_NETWORK_ERROR]              = "NetError",
 };
 
 /*
@@ -137,7 +137,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
        if (v == &rxrpc_connections) {
                seq_puts(seq,
                         "Proto Local                  Remote                "
-                        " SvID ConnID   Calls    End Use State    Key     "
+                        " SvID ConnID   End Use State    Key     "
                         " Serial   ISerial\n"
                         );
                return 0;
@@ -154,13 +154,12 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
                ntohs(conn->params.peer->srx.transport.sin.sin_port));
 
        seq_printf(seq,
-                  "UDP   %-22.22s %-22.22s %4x %08x %08x %s %3u"
+                  "UDP   %-22.22s %-22.22s %4x %08x %s %3u"
                   " %s %08x %08x %08x\n",
                   lbuff,
                   rbuff,
                   conn->params.service_id,
                   conn->proto.cid,
-                  conn->call_counter,
                   rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
                   atomic_read(&conn->usage),
                   rxrpc_conn_states[conn->state],
index 23c05ec..63afa9e 100644
@@ -103,43 +103,43 @@ error:
  * prime the encryption state with the invariant parts of a connection's
  * description
  */
-static void rxkad_prime_packet_security(struct rxrpc_connection *conn)
+static int rxkad_prime_packet_security(struct rxrpc_connection *conn)
 {
        struct rxrpc_key_token *token;
        SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
-       struct scatterlist sg[2];
+       struct scatterlist sg;
        struct rxrpc_crypt iv;
-       struct {
-               __be32 x[4];
-       } tmpbuf __attribute__((aligned(16))); /* must all be in same page */
+       __be32 *tmpbuf;
+       size_t tmpsize = 4 * sizeof(__be32);
 
        _enter("");
 
        if (!conn->params.key)
-               return;
+               return 0;
+
+       tmpbuf = kmalloc(tmpsize, GFP_KERNEL);
+       if (!tmpbuf)
+               return -ENOMEM;
 
        token = conn->params.key->payload.data[0];
        memcpy(&iv, token->kad->session_key, sizeof(iv));
 
-       tmpbuf.x[0] = htonl(conn->proto.epoch);
-       tmpbuf.x[1] = htonl(conn->proto.cid);
-       tmpbuf.x[2] = 0;
-       tmpbuf.x[3] = htonl(conn->security_ix);
-
-       sg_init_one(&sg[0], &tmpbuf, sizeof(tmpbuf));
-       sg_init_one(&sg[1], &tmpbuf, sizeof(tmpbuf));
+       tmpbuf[0] = htonl(conn->proto.epoch);
+       tmpbuf[1] = htonl(conn->proto.cid);
+       tmpbuf[2] = 0;
+       tmpbuf[3] = htonl(conn->security_ix);
 
+       sg_init_one(&sg, tmpbuf, tmpsize);
        skcipher_request_set_tfm(req, conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg[1], &sg[0], sizeof(tmpbuf), iv.x);
-
+       skcipher_request_set_crypt(req, &sg, &sg, tmpsize, iv.x);
        crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
 
-       memcpy(&conn->csum_iv, &tmpbuf.x[2], sizeof(conn->csum_iv));
-       ASSERTCMP((u32 __force)conn->csum_iv.n[0], ==, (u32 __force)tmpbuf.x[2]);
-
-       _leave("");
+       memcpy(&conn->csum_iv, tmpbuf + 2, sizeof(conn->csum_iv));
+       kfree(tmpbuf);
+       _leave(" = 0");
+       return 0;
 }
 
 /*
@@ -152,12 +152,9 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
 {
        struct rxrpc_skb_priv *sp;
        SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
+       struct rxkad_level1_hdr hdr;
        struct rxrpc_crypt iv;
-       struct scatterlist sg[2];
-       struct {
-               struct rxkad_level1_hdr hdr;
-               __be32  first;  /* first four bytes of data and padding */
-       } tmpbuf __attribute__((aligned(8))); /* must all be in same page */
+       struct scatterlist sg;
        u16 check;
 
        sp = rxrpc_skb(skb);
@@ -167,24 +164,19 @@ static int rxkad_secure_packet_auth(const struct rxrpc_call *call,
        check = sp->hdr.seq ^ sp->hdr.callNumber;
        data_size |= (u32)check << 16;
 
-       tmpbuf.hdr.data_size = htonl(data_size);
-       memcpy(&tmpbuf.first, sechdr + 4, sizeof(tmpbuf.first));
+       hdr.data_size = htonl(data_size);
+       memcpy(sechdr, &hdr, sizeof(hdr));
 
        /* start the encryption afresh */
        memset(&iv, 0, sizeof(iv));
 
-       sg_init_one(&sg[0], &tmpbuf, sizeof(tmpbuf));
-       sg_init_one(&sg[1], &tmpbuf, sizeof(tmpbuf));
-
+       sg_init_one(&sg, sechdr, 8);
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg[1], &sg[0], sizeof(tmpbuf), iv.x);
-
+       skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
        crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
 
-       memcpy(sechdr, &tmpbuf, sizeof(tmpbuf));
-
        _leave(" = 0");
        return 0;
 }
@@ -198,8 +190,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
                                       void *sechdr)
 {
        const struct rxrpc_key_token *token;
-       struct rxkad_level2_hdr rxkhdr
-               __attribute__((aligned(8))); /* must be all on one page */
+       struct rxkad_level2_hdr rxkhdr;
        struct rxrpc_skb_priv *sp;
        SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
        struct rxrpc_crypt iv;
@@ -218,18 +209,16 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 
        rxkhdr.data_size = htonl(data_size | (u32)check << 16);
        rxkhdr.checksum = 0;
+       memcpy(sechdr, &rxkhdr, sizeof(rxkhdr));
 
        /* encrypt from the session key */
        token = call->conn->params.key->payload.data[0];
        memcpy(&iv, token->kad->session_key, sizeof(iv));
 
        sg_init_one(&sg[0], sechdr, sizeof(rxkhdr));
-       sg_init_one(&sg[1], &rxkhdr, sizeof(rxkhdr));
-
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg[1], &sg[0], sizeof(rxkhdr), iv.x);
-
+       skcipher_request_set_crypt(req, &sg[0], &sg[0], sizeof(rxkhdr), iv.x);
        crypto_skcipher_encrypt(req);
 
        /* we want to encrypt the skbuff in-place */
@@ -243,9 +232,7 @@ static int rxkad_secure_packet_encrypt(const struct rxrpc_call *call,
 
        sg_init_table(sg, nsg);
        skb_to_sgvec(skb, sg, 0, len);
-
        skcipher_request_set_crypt(req, sg, sg, len, iv.x);
-
        crypto_skcipher_encrypt(req);
 
        _leave(" = 0");
@@ -259,7 +246,7 @@ out:
 /*
  * checksum an RxRPC packet header
  */
-static int rxkad_secure_packet(const struct rxrpc_call *call,
+static int rxkad_secure_packet(struct rxrpc_call *call,
                               struct sk_buff *skb,
                               size_t data_size,
                               void *sechdr)
@@ -267,10 +254,7 @@ static int rxkad_secure_packet(const struct rxrpc_call *call,
        struct rxrpc_skb_priv *sp;
        SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
        struct rxrpc_crypt iv;
-       struct scatterlist sg[2];
-       struct {
-               __be32 x[2];
-       } tmpbuf __attribute__((aligned(8))); /* must all be in same page */
+       struct scatterlist sg;
        u32 x, y;
        int ret;
 
@@ -293,20 +277,17 @@ static int rxkad_secure_packet(const struct rxrpc_call *call,
        /* calculate the security checksum */
        x = call->channel << (32 - RXRPC_CIDSHIFT);
        x |= sp->hdr.seq & 0x3fffffff;
-       tmpbuf.x[0] = htonl(sp->hdr.callNumber);
-       tmpbuf.x[1] = htonl(x);
-
-       sg_init_one(&sg[0], &tmpbuf, sizeof(tmpbuf));
-       sg_init_one(&sg[1], &tmpbuf, sizeof(tmpbuf));
+       call->crypto_buf[0] = htonl(sp->hdr.callNumber);
+       call->crypto_buf[1] = htonl(x);
 
+       sg_init_one(&sg, call->crypto_buf, 8);
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg[1], &sg[0], sizeof(tmpbuf), iv.x);
-
+       skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
        crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
 
-       y = ntohl(tmpbuf.x[1]);
+       y = ntohl(call->crypto_buf[1]);
        y = (y >> 16) & 0xffff;
        if (y == 0)
                y = 1; /* zero checksums are not permitted */
@@ -367,7 +348,6 @@ static int rxkad_verify_packet_auth(const struct rxrpc_call *call,
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, sg, sg, 8, iv.x);
-
        crypto_skcipher_decrypt(req);
        skcipher_request_zero(req);
 
@@ -452,7 +432,6 @@ static int rxkad_verify_packet_encrypt(const struct rxrpc_call *call,
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, sg, sg, skb->len, iv.x);
-
        crypto_skcipher_decrypt(req);
        skcipher_request_zero(req);
        if (sg != _sg)
@@ -498,17 +477,14 @@ nomem:
 /*
  * verify the security on a received packet
  */
-static int rxkad_verify_packet(const struct rxrpc_call *call,
+static int rxkad_verify_packet(struct rxrpc_call *call,
                               struct sk_buff *skb,
                               u32 *_abort_code)
 {
        SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
        struct rxrpc_skb_priv *sp;
        struct rxrpc_crypt iv;
-       struct scatterlist sg[2];
-       struct {
-               __be32 x[2];
-       } tmpbuf __attribute__((aligned(8))); /* must all be in same page */
+       struct scatterlist sg;
        u16 cksum;
        u32 x, y;
        int ret;
@@ -533,20 +509,17 @@ static int rxkad_verify_packet(const struct rxrpc_call *call,
        /* validate the security checksum */
        x = call->channel << (32 - RXRPC_CIDSHIFT);
        x |= sp->hdr.seq & 0x3fffffff;
-       tmpbuf.x[0] = htonl(call->call_id);
-       tmpbuf.x[1] = htonl(x);
-
-       sg_init_one(&sg[0], &tmpbuf, sizeof(tmpbuf));
-       sg_init_one(&sg[1], &tmpbuf, sizeof(tmpbuf));
+       call->crypto_buf[0] = htonl(call->call_id);
+       call->crypto_buf[1] = htonl(x);
 
+       sg_init_one(&sg, call->crypto_buf, 8);
        skcipher_request_set_tfm(req, call->conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
-       skcipher_request_set_crypt(req, &sg[1], &sg[0], sizeof(tmpbuf), iv.x);
-
+       skcipher_request_set_crypt(req, &sg, &sg, 8, iv.x);
        crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
 
-       y = ntohl(tmpbuf.x[1]);
+       y = ntohl(call->crypto_buf[1]);
        cksum = (y >> 16) & 0xffff;
        if (cksum == 0)
                cksum = 1; /* zero checksums are not permitted */
@@ -709,29 +682,6 @@ static void rxkad_calc_response_checksum(struct rxkad_response *response)
        response->encrypted.checksum = htonl(csum);
 }
 
-/*
- * load a scatterlist with a potentially split-page buffer
- */
-static void rxkad_sg_set_buf2(struct scatterlist sg[2],
-                             void *buf, size_t buflen)
-{
-       int nsg = 1;
-
-       sg_init_table(sg, 2);
-
-       sg_set_buf(&sg[0], buf, buflen);
-       if (sg[0].offset + buflen > PAGE_SIZE) {
-               /* the buffer was split over two pages */
-               sg[0].length = PAGE_SIZE - sg[0].offset;
-               sg_set_buf(&sg[1], buf + sg[0].length, buflen - sg[0].length);
-               nsg++;
-       }
-
-       sg_mark_end(&sg[nsg - 1]);
-
-       ASSERTCMP(sg[0].length + sg[1].length, ==, buflen);
-}
-
 /*
  * encrypt the response packet
  */
@@ -741,17 +691,16 @@ static void rxkad_encrypt_response(struct rxrpc_connection *conn,
 {
        SKCIPHER_REQUEST_ON_STACK(req, conn->cipher);
        struct rxrpc_crypt iv;
-       struct scatterlist sg[2];
+       struct scatterlist sg[1];
 
        /* continue encrypting from where we left off */
        memcpy(&iv, s2->session_key, sizeof(iv));
 
-       rxkad_sg_set_buf2(sg, &resp->encrypted, sizeof(resp->encrypted));
-
+       sg_init_table(sg, 1);
+       sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
        skcipher_request_set_tfm(req, conn->cipher);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
-
        crypto_skcipher_encrypt(req);
        skcipher_request_zero(req);
 }
@@ -818,14 +767,10 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
        resp.kvno                       = htonl(token->kad->kvno);
        resp.ticket_len                 = htonl(token->kad->ticket_len);
 
-       resp.encrypted.call_id[0] =
-               htonl(conn->channels[0] ? conn->channels[0]->call_id : 0);
-       resp.encrypted.call_id[1] =
-               htonl(conn->channels[1] ? conn->channels[1]->call_id : 0);
-       resp.encrypted.call_id[2] =
-               htonl(conn->channels[2] ? conn->channels[2]->call_id : 0);
-       resp.encrypted.call_id[3] =
-               htonl(conn->channels[3] ? conn->channels[3]->call_id : 0);
+       resp.encrypted.call_id[0] = htonl(conn->channels[0].call_counter);
+       resp.encrypted.call_id[1] = htonl(conn->channels[1].call_counter);
+       resp.encrypted.call_id[2] = htonl(conn->channels[2].call_counter);
+       resp.encrypted.call_id[3] = htonl(conn->channels[3].call_counter);
 
        /* calculate the response checksum and then do the encryption */
        rxkad_calc_response_checksum(&resp);
@@ -887,10 +832,8 @@ static int rxkad_decrypt_ticket(struct rxrpc_connection *conn,
        }
 
        sg_init_one(&sg[0], ticket, ticket_len);
-
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, sg, sg, ticket_len, iv.x);
-
        crypto_skcipher_decrypt(req);
        skcipher_request_free(req);
 
@@ -1001,7 +944,7 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
                                   const struct rxrpc_crypt *session_key)
 {
        SKCIPHER_REQUEST_ON_STACK(req, rxkad_ci);
-       struct scatterlist sg[2];
+       struct scatterlist sg[1];
        struct rxrpc_crypt iv;
 
        _enter(",,%08x%08x",
@@ -1016,12 +959,11 @@ static void rxkad_decrypt_response(struct rxrpc_connection *conn,
 
        memcpy(&iv, session_key, sizeof(iv));
 
-       rxkad_sg_set_buf2(sg, &resp->encrypted, sizeof(resp->encrypted));
-
+       sg_init_table(sg, 1);
+       sg_set_buf(sg, &resp->encrypted, sizeof(resp->encrypted));
        skcipher_request_set_tfm(req, rxkad_ci);
        skcipher_request_set_callback(req, 0, NULL, NULL);
        skcipher_request_set_crypt(req, sg, sg, sizeof(resp->encrypted), iv.x);
-
        crypto_skcipher_decrypt(req);
        skcipher_request_zero(req);
 
@@ -1045,7 +987,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
        void *ticket;
        u32 abort_code, version, kvno, ticket_len, level;
        __be32 csum;
-       int ret;
+       int ret, i;
 
        _enter("{%d,%x}", conn->debug_id, key_serial(conn->server_key));
 
@@ -1108,11 +1050,26 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
        if (response.encrypted.checksum != csum)
                goto protocol_error_free;
 
-       if (ntohl(response.encrypted.call_id[0]) > INT_MAX ||
-           ntohl(response.encrypted.call_id[1]) > INT_MAX ||
-           ntohl(response.encrypted.call_id[2]) > INT_MAX ||
-           ntohl(response.encrypted.call_id[3]) > INT_MAX)
-               goto protocol_error_free;
+       spin_lock(&conn->channel_lock);
+       for (i = 0; i < RXRPC_MAXCALLS; i++) {
+               struct rxrpc_call *call;
+               u32 call_id = ntohl(response.encrypted.call_id[i]);
+
+               if (call_id > INT_MAX)
+                       goto protocol_error_unlock;
+
+               if (call_id < conn->channels[i].call_counter)
+                       goto protocol_error_unlock;
+               if (call_id > conn->channels[i].call_counter) {
+                       call = rcu_dereference_protected(
+                               conn->channels[i].call,
+                               lockdep_is_held(&conn->channel_lock));
+                       if (call && call->state < RXRPC_CALL_COMPLETE)
+                               goto protocol_error_unlock;
+                       conn->channels[i].call_counter = call_id;
+               }
+       }
+       spin_unlock(&conn->channel_lock);
 
        abort_code = RXKADOUTOFSEQUENCE;
        if (ntohl(response.encrypted.inc_nonce) != conn->security_nonce + 1)
@@ -1137,6 +1094,8 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
        _leave(" = 0");
        return 0;
 
+protocol_error_unlock:
+       spin_unlock(&conn->channel_lock);
 protocol_error_free:
        kfree(ticket);
 protocol_error:
index f28122a..b88914d 100644

  */
 
 #include <linux/ip.h>
+#include <linux/ipv6.h>
 #include <linux/udp.h>
 #include "ar-internal.h"
 
 /*
- * Set up an RxRPC address from a socket buffer.
+ * Fill out a peer address from a socket buffer containing a packet.
  */
-void rxrpc_get_addr_from_skb(struct rxrpc_local *local,
-                            const struct sk_buff *skb,
-                            struct sockaddr_rxrpc *srx)
+int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *srx, struct sk_buff *skb)
 {
        memset(srx, 0, sizeof(*srx));
-       srx->transport_type = local->srx.transport_type;
-       srx->transport.family = local->srx.transport.family;
 
-       /* Can we see an ipv4 UDP packet on an ipv6 UDP socket?  and vice
-        * versa?
-        */
-       switch (srx->transport.family) {
-       case AF_INET:
+       switch (ntohs(skb->protocol)) {
+       case ETH_P_IP:
+               srx->transport_type = SOCK_DGRAM;
+               srx->transport_len = sizeof(srx->transport.sin);
+               srx->transport.sin.sin_family = AF_INET;
                srx->transport.sin.sin_port = udp_hdr(skb)->source;
-               srx->transport_len = sizeof(struct sockaddr_in);
-               memcpy(&srx->transport.sin.sin_addr, &ip_hdr(skb)->saddr,
-                      sizeof(struct in_addr));
-               break;
+               srx->transport.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+               return 0;
+
+       case ETH_P_IPV6:
+               srx->transport_type = SOCK_DGRAM;
+               srx->transport_len = sizeof(srx->transport.sin6);
+               srx->transport.sin6.sin6_family = AF_INET6;
+               srx->transport.sin6.sin6_port = udp_hdr(skb)->source;
+               srx->transport.sin6.sin6_addr = ipv6_hdr(skb)->saddr;
+               return 0;
 
        default:
-               BUG();
+               pr_warn_ratelimited("AF_RXRPC: Unknown eth protocol %u\n",
+                                   ntohs(skb->protocol));
+               return -EAFNOSUPPORT;
        }
 }
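The rewritten `rxrpc_extract_addr_from_skb()` keys off `skb->protocol` (an ethertype) rather than the local socket's family, so an IPv6 packet on a dual-stack socket is no longer a `BUG()`. A trivial userspace mirror of that dispatch (the `XF_*`/`XERR_*` constants are stand-ins so the sketch is self-contained; 0x0800 and 0x86DD are the real `ETH_P_IP`/`ETH_P_IPV6` values):

```c
#include <assert.h>
#include <stdint.h>

#define XF_INET		 2	/* stand-in for AF_INET */
#define XF_INET6	 10	/* stand-in for AF_INET6 */
#define XERR_AFNOSUPPORT (-97)	/* stand-in for -EAFNOSUPPORT */

/* Map a packet's ethertype to the address family to decode. */
static int ethertype_to_family(uint16_t ethertype)
{
	switch (ethertype) {
	case 0x0800:		/* ETH_P_IP */
		return XF_INET;
	case 0x86DD:		/* ETH_P_IPV6 */
		return XF_INET6;
	default:
		return XERR_AFNOSUPPORT;
	}
}
```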
index dff92ea..3ddc7bd 100644
@@ -115,9 +115,9 @@ struct hfsc_class {
        struct gnet_stats_basic_packed bstats;
        struct gnet_stats_queue qstats;
        struct gnet_stats_rate_est64 rate_est;
-       unsigned int    level;          /* class level in hierarchy */
        struct tcf_proto __rcu *filter_list; /* filter list */
        unsigned int    filter_cnt;     /* filter count */
+       unsigned int    level;          /* class level in hierarchy */
 
        struct hfsc_sched *sched;       /* scheduler data */
        struct hfsc_class *cl_parent;   /* parent class */
@@ -165,10 +165,10 @@ struct hfsc_class {
        struct runtime_sc cl_virtual;   /* virtual curve */
        struct runtime_sc cl_ulimit;    /* upperlimit curve */
 
-       unsigned long   cl_flags;       /* which curves are valid */
-       unsigned long   cl_vtperiod;    /* vt period sequence number */
-       unsigned long   cl_parentperiod;/* parent's vt period sequence number*/
-       unsigned long   cl_nactive;     /* number of active children */
+       u8              cl_flags;       /* which curves are valid */
+       u32             cl_vtperiod;    /* vt period sequence number */
+       u32             cl_parentperiod;/* parent's vt period sequence number*/
+       u32             cl_nactive;     /* number of active children */
 };
 
 struct hfsc_sched {
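The hfsc hunks shrink four `unsigned long` counters to a `u8` plus three `u32`s and move `level` next to the other ints, closing padding holes. A toy pair of structs (names hypothetical) shows the size effect that motivates this kind of field-narrowing:

```c
#include <stdint.h>
#include <stddef.h>

/* Before: four unsigned longs, 32 bytes on LP64. */
struct wide_counters {
        unsigned long cl_flags;
        unsigned long cl_vtperiod;
        unsigned long cl_parentperiod;
        unsigned long cl_nactive;
};

/* After: one byte of flag bits plus three 32-bit counters.
 * Sequence numbers only ever get compared for equality, so
 * wrapping a u32 is harmless. */
struct narrow_counters {
        uint8_t  cl_flags;        /* which curves are valid */
        uint32_t cl_vtperiod;     /* vt period sequence number */
        uint32_t cl_parentperiod; /* parent's vt period sequence number */
        uint32_t cl_nactive;      /* number of active children */
};
```

On 64-bit the narrow form is half the size; on 32-bit it is no larger, so the change can only help cache footprint.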
index 91982d9..53dbfa1 100644
@@ -1122,8 +1122,10 @@ htb_dump_class_stats(struct Qdisc *sch, unsigned long arg, struct gnet_dump *d)
                qlen = cl->un.leaf.q->q.qlen;
                qs.backlog = cl->un.leaf.q->qstats.backlog;
        }
-       cl->xstats.tokens = PSCHED_NS2TICKS(cl->tokens);
-       cl->xstats.ctokens = PSCHED_NS2TICKS(cl->ctokens);
+       cl->xstats.tokens = clamp_t(s64, PSCHED_NS2TICKS(cl->tokens),
+                                   INT_MIN, INT_MAX);
+       cl->xstats.ctokens = clamp_t(s64, PSCHED_NS2TICKS(cl->ctokens),
+                                    INT_MIN, INT_MAX);
 
        if (gnet_stats_copy_basic(qdisc_root_sleeping_running(sch),
                                  d, NULL, &cl->bstats) < 0 ||
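The htb hunk replaces a silent 64-to-32-bit truncation of `PSCHED_NS2TICKS()` with a saturating clamp into the `int`-sized xstats fields. A minimal analogue of `clamp_t(s64, v, INT_MIN, INT_MAX)`:

```c
#include <stdint.h>
#include <limits.h>

/* Saturate a 64-bit value into [lo, hi] instead of letting the
 * assignment to a narrower field wrap around. */
static int64_t clamp_s64(int64_t v, int64_t lo, int64_t hi)
{
        return v < lo ? lo : (v > hi ? hi : v);
}
```

With a clamp, a huge token count reports as `INT_MAX` rather than as a bogus negative number.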
index e1849f3..1c23060 100644
@@ -268,6 +268,7 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
                goto fail_init;
 
        asoc->active_key_id = ep->active_key_id;
+       asoc->prsctp_enable = ep->prsctp_enable;
 
        /* Save the hmacs and chunks list into this association */
        if (ep->auth_hmacs_list)
index 1eb94bf..a55e547 100644
@@ -335,13 +335,32 @@ errout:
 /* Check whether this message has expired. */
 int sctp_chunk_abandoned(struct sctp_chunk *chunk)
 {
-       struct sctp_datamsg *msg = chunk->msg;
+       if (!chunk->asoc->prsctp_enable ||
+           !SCTP_PR_POLICY(chunk->sinfo.sinfo_flags)) {
+               struct sctp_datamsg *msg = chunk->msg;
+
+               if (!msg->can_abandon)
+                       return 0;
+
+               if (time_after(jiffies, msg->expires_at))
+                       return 1;
 
-       if (!msg->can_abandon)
                return 0;
+       }
 
-       if (time_after(jiffies, msg->expires_at))
+       if (SCTP_PR_TTL_ENABLED(chunk->sinfo.sinfo_flags) &&
+           time_after(jiffies, chunk->prsctp_param)) {
+               if (chunk->sent_count)
+                       chunk->asoc->abandoned_sent[SCTP_PR_INDEX(TTL)]++;
+               else
+                       chunk->asoc->abandoned_unsent[SCTP_PR_INDEX(TTL)]++;
+               return 1;
+       } else if (SCTP_PR_RTX_ENABLED(chunk->sinfo.sinfo_flags) &&
+                  chunk->sent_count > chunk->prsctp_param) {
+               chunk->asoc->abandoned_sent[SCTP_PR_INDEX(RTX)]++;
                return 1;
+       }
+       /* PRIO policy is processed by sendmsg, not here */
 
        return 0;
 }
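The new `sctp_chunk_abandoned()` logic above implements two PR-SCTP policies: TTL compares a precomputed deadline against the current time, RTX compares the chunk's send count against a retransmission budget, and PRIO is deferred to sendmsg. A simplified sketch of that decision (struct and field names hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

enum pr_policy { PR_NONE, PR_TTL, PR_RTX };

struct mini_chunk {
        enum pr_policy policy;
        uint64_t prsctp_param;  /* TTL: deadline; RTX: max sends */
        uint64_t sent_count;
};

bool chunk_abandoned(const struct mini_chunk *c, uint64_t now)
{
        switch (c->policy) {
        case PR_TTL:
                return now > c->prsctp_param;   /* lifetime expired */
        case PR_RTX:
                return c->sent_count > c->prsctp_param; /* budget spent */
        default:
                return false;   /* PRIO is pruned at sendmsg, not here */
        }
}
```

Note the overloading of a single `prsctp_param` field: the policy flag decides whether it holds a time or a count, exactly as in the patch.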
index 9d494e3..1f03065 100644
@@ -163,6 +163,7 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
         */
        ep->auth_hmacs_list = auth_hmacs;
        ep->auth_chunk_list = auth_chunks;
+       ep->prsctp_enable = net->sctp.prsctp_enable;
 
        return ep;
 
index 6f8e676..30d72f7 100644
@@ -90,17 +90,6 @@ static inline int sctp_rcv_checksum(struct net *net, struct sk_buff *skb)
        return 0;
 }
 
-struct sctp_input_cb {
-       union {
-               struct inet_skb_parm    h4;
-#if IS_ENABLED(CONFIG_IPV6)
-               struct inet6_skb_parm   h6;
-#endif
-       } header;
-       struct sctp_chunk *chunk;
-};
-#define SCTP_INPUT_CB(__skb)   ((struct sctp_input_cb *)&((__skb)->cb[0]))
-
 /*
  * This is the routine which IP calls when receiving an SCTP packet.
  */
@@ -151,6 +140,7 @@ int sctp_rcv(struct sk_buff *skb)
        af = sctp_get_af_specific(family);
        if (unlikely(!af))
                goto discard_it;
+       SCTP_INPUT_CB(skb)->af = af;
 
        /* Initialize local addresses for lookups. */
        af->from_skb(&src, skb, 1);
index edabbbd..9427706 100644
@@ -217,7 +217,14 @@ new_skb:
                chunk->auth = 0;
                chunk->has_asconf = 0;
                chunk->end_of_packet = 0;
-               chunk->ecn_ce_done = 0;
+               if (chunk->head_skb) {
+                       struct sctp_input_cb
+                               *cb = SCTP_INPUT_CB(chunk->skb),
+                               *head_cb = SCTP_INPUT_CB(chunk->head_skb);
+
+                       cb->chunk = head_cb->chunk;
+                       cb->af = head_cb->af;
+               }
        }
 
        chunk->chunk_hdr = ch;
index 0657d18..ae6f1a2 100644
@@ -420,6 +420,7 @@ static void sctp_v6_from_skb(union sctp_addr *addr, struct sk_buff *skb,
        addr->v6.sin6_flowinfo = 0; /* FIXME */
        addr->v6.sin6_scope_id = ((struct inet6_skb_parm *)skb->cb)->iif;
 
+       /* Always called on head skb, so this is safe */
        sh = sctp_hdr(skb);
        if (is_saddr) {
                *port  = sh->source;
@@ -710,8 +711,7 @@ static int sctp_v6_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
 /* Where did this skb come from?  */
 static int sctp_v6_skb_iif(const struct sk_buff *skb)
 {
-       struct inet6_skb_parm *opt = (struct inet6_skb_parm *) skb->cb;
-       return opt->iif;
+       return IP6CB(skb)->iif;
 }
 
 /* Was this packet marked by Explicit Congestion Notification? */
@@ -780,15 +780,14 @@ static void sctp_inet6_skb_msgname(struct sk_buff *skb, char *msgname,
        if (ip_hdr(skb)->version == 4) {
                addr->v4.sin_family = AF_INET;
                addr->v4.sin_port = sh->source;
-               addr->v4.sin_addr.s_addr =  ip_hdr(skb)->saddr;
+               addr->v4.sin_addr.s_addr = ip_hdr(skb)->saddr;
        } else {
                addr->v6.sin6_family = AF_INET6;
                addr->v6.sin6_flowinfo = 0;
                addr->v6.sin6_port = sh->source;
                addr->v6.sin6_addr = ipv6_hdr(skb)->saddr;
                if (ipv6_addr_type(&addr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) {
-                       struct sctp_ulpevent *ev = sctp_skb2event(skb);
-                       addr->v6.sin6_scope_id = ev->iif;
+                       addr->v6.sin6_scope_id = sctp_v6_skb_iif(skb);
                }
        }
 
index a37887b..7e869d0 100644
@@ -92,7 +92,28 @@ static const struct net_offload sctp_offload = {
        },
 };
 
+static const struct net_offload sctp6_offload = {
+       .callbacks = {
+               .gso_segment = sctp_gso_segment,
+       },
+};
+
 int __init sctp_offload_init(void)
 {
-       return inet_add_offload(&sctp_offload, IPPROTO_SCTP);
+       int ret;
+
+       ret = inet_add_offload(&sctp_offload, IPPROTO_SCTP);
+       if (ret)
+               goto out;
+
+       ret = inet6_add_offload(&sctp6_offload, IPPROTO_SCTP);
+       if (ret)
+               goto ipv4;
+
+       return ret;
+
+ipv4:
+       inet_del_offload(&sctp_offload, IPPROTO_SCTP);
+out:
+       return ret;
 }
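`sctp_offload_init()` above grows a second registration and therefore the kernel's usual goto-unwind idiom: each step that succeeds gets a matching rollback label, and a later failure undoes earlier steps in reverse order. A toy version with stubbed register calls (all names here are illustrative):

```c
static int v4_registered, v6_registered;

static int reg_v4(void) { v4_registered = 1; return 0; }
static int reg_v6(int fail) { if (fail) return -1; v6_registered = 1; return 0; }
static void unreg_v4(void) { v4_registered = 0; }

int offload_init(int fail_v6)
{
        int ret;

        ret = reg_v4();
        if (ret)
                goto out;
        ret = reg_v6(fail_v6);
        if (ret)
                goto ipv4;      /* roll back the IPv4 registration */
        return 0;
ipv4:
        unreg_v4();
out:
        return ret;
}
```

Either both offloads end up registered or neither does, so a failed init never leaves a half-wired protocol behind.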
index 1541a91..7425f6c 100644
@@ -316,6 +316,8 @@ static sctp_xmit_t __sctp_packet_append_chunk(struct sctp_packet *packet,
                packet->has_data = 1;
                /* timestamp the chunk for rtx purposes */
                chunk->sent_at = jiffies;
+               /* Mainly used for prsctp RTX policy */
+               chunk->sent_count++;
                break;
        case SCTP_CID_COOKIE_ECHO:
                packet->has_cookie_echo = 1;
@@ -582,9 +584,7 @@ int sctp_packet_transmit(struct sctp_packet *packet, gfp_t gfp)
                         */
                        pkt_size -= WORD_ROUND(chunk->skb->len);
 
-                       if (chunk == packet->auth && !list_empty(&packet->chunk_list))
-                               list_add(&chunk->list, &packet->chunk_list);
-                       else if (!sctp_chunk_is_data(chunk))
+                       if (!sctp_chunk_is_data(chunk) && chunk != packet->auth)
                                sctp_chunk_free(chunk);
 
                        if (!pkt_size)
@@ -605,6 +605,18 @@ int sctp_packet_transmit(struct sctp_packet *packet, gfp_t gfp)
                                                 (struct sctp_auth_chunk *)auth,
                                                 gfp);
 
+               if (packet->auth) {
+                       if (!list_empty(&packet->chunk_list)) {
+                               /* We will generate more packets, so re-queue
+                                * auth chunk.
+                                */
+                               list_add(&chunk->list, &packet->chunk_list);
+                       } else {
+                               sctp_chunk_free(packet->auth);
+                               packet->auth = NULL;
+                       }
+               }
+
                if (!gso)
                        break;
 
@@ -735,6 +747,8 @@ err:
        }
        goto out;
 nomem:
+       if (packet->auth && list_empty(&packet->auth->list))
+               sctp_chunk_free(packet->auth);
        err = -ENOMEM;
        goto err;
 }
index 084718f..72e54a4 100644
@@ -326,6 +326,9 @@ int sctp_outq_tail(struct sctp_outq *q, struct sctp_chunk *chunk, gfp_t gfp)
 
                        sctp_chunk_hold(chunk);
                        sctp_outq_tail_data(q, chunk);
+                       if (chunk->asoc->prsctp_enable &&
+                           SCTP_PR_PRIO_ENABLED(chunk->sinfo.sinfo_flags))
+                               chunk->asoc->sent_cnt_removable++;
                        if (chunk->chunk_hdr->flags & SCTP_DATA_UNORDERED)
                                SCTP_INC_STATS(net, SCTP_MIB_OUTUNORDERCHUNKS);
                        else
@@ -372,6 +375,96 @@ static void sctp_insert_list(struct list_head *head, struct list_head *new)
                list_add_tail(new, head);
 }
 
+static int sctp_prsctp_prune_sent(struct sctp_association *asoc,
+                                 struct sctp_sndrcvinfo *sinfo,
+                                 struct list_head *queue, int msg_len)
+{
+       struct sctp_chunk *chk, *temp;
+
+       list_for_each_entry_safe(chk, temp, queue, transmitted_list) {
+               if (!SCTP_PR_PRIO_ENABLED(chk->sinfo.sinfo_flags) ||
+                   chk->prsctp_param <= sinfo->sinfo_timetolive)
+                       continue;
+
+               list_del_init(&chk->transmitted_list);
+               sctp_insert_list(&asoc->outqueue.abandoned,
+                                &chk->transmitted_list);
+
+               asoc->sent_cnt_removable--;
+               asoc->abandoned_sent[SCTP_PR_INDEX(PRIO)]++;
+
+               if (!chk->tsn_gap_acked) {
+                       if (chk->transport)
+                               chk->transport->flight_size -=
+                                               sctp_data_size(chk);
+                       asoc->outqueue.outstanding_bytes -= sctp_data_size(chk);
+               }
+
+               msg_len -= SCTP_DATA_SNDSIZE(chk) +
+                          sizeof(struct sk_buff) +
+                          sizeof(struct sctp_chunk);
+               if (msg_len <= 0)
+                       break;
+       }
+
+       return msg_len;
+}
+
+static int sctp_prsctp_prune_unsent(struct sctp_association *asoc,
+                                   struct sctp_sndrcvinfo *sinfo,
+                                   struct list_head *queue, int msg_len)
+{
+       struct sctp_chunk *chk, *temp;
+
+       list_for_each_entry_safe(chk, temp, queue, list) {
+               if (!SCTP_PR_PRIO_ENABLED(chk->sinfo.sinfo_flags) ||
+                   chk->prsctp_param <= sinfo->sinfo_timetolive)
+                       continue;
+
+               list_del_init(&chk->list);
+               asoc->sent_cnt_removable--;
+               asoc->abandoned_unsent[SCTP_PR_INDEX(PRIO)]++;
+
+               msg_len -= SCTP_DATA_SNDSIZE(chk) +
+                          sizeof(struct sk_buff) +
+                          sizeof(struct sctp_chunk);
+               sctp_chunk_free(chk);
+               if (msg_len <= 0)
+                       break;
+       }
+
+       return msg_len;
+}
+
+/* Abandon the chunks according their priorities */
+void sctp_prsctp_prune(struct sctp_association *asoc,
+                      struct sctp_sndrcvinfo *sinfo, int msg_len)
+{
+       struct sctp_transport *transport;
+
+       if (!asoc->prsctp_enable || !asoc->sent_cnt_removable)
+               return;
+
+       msg_len = sctp_prsctp_prune_sent(asoc, sinfo,
+                                        &asoc->outqueue.retransmit,
+                                        msg_len);
+       if (msg_len <= 0)
+               return;
+
+       list_for_each_entry(transport, &asoc->peer.transport_addr_list,
+                           transports) {
+               msg_len = sctp_prsctp_prune_sent(asoc, sinfo,
+                                                &transport->transmitted,
+                                                msg_len);
+               if (msg_len <= 0)
+                       return;
+       }
+
+       sctp_prsctp_prune_unsent(asoc, sinfo,
+                                &asoc->outqueue.out_chunk_list,
+                                msg_len);
+}
+
 /* Mark all the eligible packets on a transport for retransmission.  */
 void sctp_retransmit_mark(struct sctp_outq *q,
                          struct sctp_transport *transport,
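`sctp_prsctp_prune()` and its helpers above walk the retransmit, per-transport, and unsent queues, abandoning PRIO-eligible chunks until enough send-buffer space is reclaimed for the new message. A stripped-down sketch of that "prune until satisfied" loop (array-based here instead of kernel lists; names hypothetical):

```c
/* Drop eligible queued sizes until 'need' bytes are reclaimed.
 * Returns the remaining deficit; <= 0 means the caller may proceed. */
int prune_until(int *sizes, int n, unsigned eligible_mask, int need)
{
        for (int i = 0; i < n && need > 0; i++) {
                if (!(eligible_mask & (1u << i)))
                        continue;   /* wrong policy or higher priority */
                need -= sizes[i];
                sizes[i] = 0;       /* "abandoned" */
        }
        return need;
}
```

As in the patch, each queue is only consulted as long as a deficit remains, so higher-value traffic further down the chain survives whenever earlier queues can cover the request.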
@@ -962,6 +1055,9 @@ static int sctp_outq_flush(struct sctp_outq *q, int rtx_timeout, gfp_t gfp)
 
                                /* Mark as failed send. */
                                sctp_chunk_fail(chunk, SCTP_ERROR_INV_STRM);
+                               if (asoc->prsctp_enable &&
+                                   SCTP_PR_PRIO_ENABLED(chunk->sinfo.sinfo_flags))
+                                       asoc->sent_cnt_removable--;
                                sctp_chunk_free(chunk);
                                continue;
                        }
@@ -1251,6 +1347,9 @@ int sctp_outq_sack(struct sctp_outq *q, struct sctp_chunk *chunk)
                tsn = ntohl(tchunk->subh.data_hdr->tsn);
                if (TSN_lte(tsn, ctsn)) {
                        list_del_init(&tchunk->transmitted_list);
+                       if (asoc->prsctp_enable &&
+                           SCTP_PR_PRIO_ENABLED(chunk->sinfo.sinfo_flags))
+                               asoc->sent_cnt_removable--;
                        sctp_chunk_free(tchunk);
                }
        }
index 3b56ae5..1adb927 100644
@@ -240,6 +240,7 @@ static void sctp_v4_from_skb(union sctp_addr *addr, struct sk_buff *skb,
        port = &addr->v4.sin_port;
        addr->v4.sin_family = AF_INET;
 
+       /* Always called on head skb, so this is safe */
        sh = sctp_hdr(skb);
        if (is_saddr) {
                *port  = sh->source;
index 56f364d..8c77b87 100644
@@ -108,14 +108,9 @@ static void sctp_control_set_owner_w(struct sctp_chunk *chunk)
 /* What was the inbound interface for this chunk? */
 int sctp_chunk_iif(const struct sctp_chunk *chunk)
 {
-       struct sctp_af *af;
-       int iif = 0;
-
-       af = sctp_get_af_specific(ipver2af(ip_hdr(chunk->skb)->version));
-       if (af)
-               iif = af->skb_iif(chunk->skb);
+       struct sk_buff *skb = chunk->skb;
 
-       return iif;
+       return SCTP_INPUT_CB(skb)->af->skb_iif(skb);
 }
 
 /* RFC 2960 3.3.2 Initiation (INIT) (1)
@@ -261,7 +256,7 @@ struct sctp_chunk *sctp_make_init(const struct sctp_association *asoc,
        chunksize += WORD_ROUND(SCTP_SAT_LEN(num_types));
        chunksize += sizeof(ecap_param);
 
-       if (net->sctp.prsctp_enable)
+       if (asoc->prsctp_enable)
                chunksize += sizeof(prsctp_param);
 
        /* ADDIP: Section 4.2.7:
@@ -355,7 +350,7 @@ struct sctp_chunk *sctp_make_init(const struct sctp_association *asoc,
                sctp_addto_param(retval, num_ext, extensions);
        }
 
-       if (net->sctp.prsctp_enable)
+       if (asoc->prsctp_enable)
                sctp_addto_chunk(retval, sizeof(prsctp_param), &prsctp_param);
 
        if (sp->adaptation_ind) {
@@ -711,6 +706,20 @@ nodata:
        return retval;
 }
 
+static void sctp_set_prsctp_policy(struct sctp_chunk *chunk,
+                                  const struct sctp_sndrcvinfo *sinfo)
+{
+       if (!chunk->asoc->prsctp_enable)
+               return;
+
+       if (SCTP_PR_TTL_ENABLED(sinfo->sinfo_flags))
+               chunk->prsctp_param =
+                       jiffies + msecs_to_jiffies(sinfo->sinfo_timetolive);
+       else if (SCTP_PR_RTX_ENABLED(sinfo->sinfo_flags) ||
+                SCTP_PR_PRIO_ENABLED(sinfo->sinfo_flags))
+               chunk->prsctp_param = sinfo->sinfo_timetolive;
+}
+
 /* Make a DATA chunk for the given association from the provided
  * parameters.  However, do not populate the data payload.
  */
@@ -744,6 +753,7 @@ struct sctp_chunk *sctp_make_datafrag_empty(struct sctp_association *asoc,
 
        retval->subh.data_hdr = sctp_addto_chunk(retval, sizeof(dp), &dp);
        memcpy(&retval->sinfo, sinfo, sizeof(struct sctp_sndrcvinfo));
+       sctp_set_prsctp_policy(retval, sinfo);
 
 nodata:
        return retval;
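`sctp_set_prsctp_policy()` above stores `sinfo_timetolive` in two different units depending on policy: for TTL it becomes an absolute jiffies deadline via `msecs_to_jiffies()`, while RTX and PRIO keep it as a raw threshold. A sketch of that encoding (this helper and its HZ handling are my own simplification, not kernel code):

```c
#include <stdint.h>

/* TTL: convert a millisecond lifetime into an absolute tick deadline.
 * RTX/PRIO: the value is already a count/threshold, keep it as-is. */
uint64_t encode_param(int is_ttl, uint64_t now_ticks,
                      uint32_t ttl_or_count, uint64_t hz)
{
        if (is_ttl)
                return now_ticks + (ttl_or_count * hz) / 1000;
        return ttl_or_count;
}
```

The later abandonment check then only needs a single comparison per chunk, because the expensive unit conversion was done once at enqueue time.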
@@ -1585,7 +1595,6 @@ struct sctp_association *sctp_make_temp_asoc(const struct sctp_endpoint *ep,
        struct sctp_association *asoc;
        struct sk_buff *skb;
        sctp_scope_t scope;
-       struct sctp_af *af;
 
        /* Create the bare association.  */
        scope = sctp_scope(sctp_source(chunk));
@@ -1595,16 +1604,10 @@ struct sctp_association *sctp_make_temp_asoc(const struct sctp_endpoint *ep,
        asoc->temp = 1;
        skb = chunk->skb;
        /* Create an entry for the source address of the packet.  */
-       af = sctp_get_af_specific(ipver2af(ip_hdr(skb)->version));
-       if (unlikely(!af))
-               goto fail;
-       af->from_skb(&asoc->c.peer_addr, skb, 1);
+       SCTP_INPUT_CB(skb)->af->from_skb(&asoc->c.peer_addr, skb, 1);
+
 nodata:
        return asoc;
-
-fail:
-       sctp_association_free(asoc);
-       return NULL;
 }
 
 /* Build a cookie representing asoc.
@@ -2024,8 +2027,8 @@ static void sctp_process_ext_param(struct sctp_association *asoc,
        for (i = 0; i < num_ext; i++) {
                switch (param.ext->chunks[i]) {
                case SCTP_CID_FWD_TSN:
-                       if (net->sctp.prsctp_enable && !asoc->peer.prsctp_capable)
-                                   asoc->peer.prsctp_capable = 1;
+                       if (asoc->prsctp_enable && !asoc->peer.prsctp_capable)
+                               asoc->peer.prsctp_capable = 1;
                        break;
                case SCTP_CID_AUTH:
                        /* if the peer reports AUTH, assume that he
@@ -2169,7 +2172,7 @@ static sctp_ierror_t sctp_verify_param(struct net *net,
                break;
 
        case SCTP_PARAM_FWD_TSN_SUPPORT:
-               if (net->sctp.prsctp_enable)
+               if (ep->prsctp_enable)
                        break;
                goto fallthrough;
 
@@ -2653,7 +2656,7 @@ do_addr_param:
                break;
 
        case SCTP_PARAM_FWD_TSN_SUPPORT:
-               if (net->sctp.prsctp_enable) {
+               if (asoc->prsctp_enable) {
                        asoc->peer.prsctp_capable = 1;
                        break;
                }
index f1f08c8..d88bb2b 100644
@@ -6118,14 +6118,11 @@ static int sctp_eat_data(const struct sctp_association *asoc,
         * chunk later.
         */
 
-       if (!chunk->ecn_ce_done) {
-               struct sctp_af *af;
+       if (asoc->peer.ecn_capable && !chunk->ecn_ce_done) {
+               struct sctp_af *af = SCTP_INPUT_CB(chunk->skb)->af;
                chunk->ecn_ce_done = 1;
 
-               af = sctp_get_af_specific(
-                       ipver2af(ip_hdr(chunk->skb)->version));
-
-               if (af && af->is_ce(chunk->skb) && asoc->peer.ecn_capable) {
+               if (af->is_ce(sctp_gso_headskb(chunk->skb))) {
                        /* Do real work as sideffect. */
                        sctp_add_cmd_sf(commands, SCTP_CMD_ECN_CE,
                                        SCTP_U32(tsn));
index cdabbd8..d2681cb 100644
@@ -202,7 +202,7 @@ struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id)
                 * could be a TCP-style listening socket or a socket which
                 * hasn't yet called connect() to establish an association.
                 */
-               if (!sctp_sstate(sk, ESTABLISHED))
+               if (!sctp_sstate(sk, ESTABLISHED) && !sctp_sstate(sk, CLOSING))
                        return NULL;
 
                /* Get the first and the only association from the list. */
@@ -1068,7 +1068,7 @@ static int __sctp_connect(struct sock *sk,
         * is already connected.
         * It cannot be done even on a TCP-style listening socket.
         */
-       if (sctp_sstate(sk, ESTABLISHED) ||
+       if (sctp_sstate(sk, ESTABLISHED) || sctp_sstate(sk, CLOSING) ||
            (sctp_style(sk, TCP) && sctp_sstate(sk, LISTENING))) {
                err = -EISCONN;
                goto out_free;
@@ -1705,18 +1705,19 @@ static int sctp_sendmsg(struct sock *sk, struct msghdr *msg, size_t msg_len)
        if (msg_name) {
                /* Look for a matching association on the endpoint. */
                asoc = sctp_endpoint_lookup_assoc(ep, &to, &transport);
-               if (!asoc) {
-                       /* If we could not find a matching association on the
-                        * endpoint, make sure that it is not a TCP-style
-                        * socket that already has an association or there is
-                        * no peeled-off association on another socket.
-                        */
-                       if ((sctp_style(sk, TCP) &&
-                            sctp_sstate(sk, ESTABLISHED)) ||
-                           sctp_endpoint_is_peeled_off(ep, &to)) {
-                               err = -EADDRNOTAVAIL;
-                               goto out_unlock;
-                       }
+
+               /* If we could not find a matching association on the
+                * endpoint, make sure that it is not a TCP-style
+                * socket that already has an association or there is
+                * no peeled-off association on another socket.
+                */
+               if (!asoc &&
+                   ((sctp_style(sk, TCP) &&
+                     (sctp_sstate(sk, ESTABLISHED) ||
+                      sctp_sstate(sk, CLOSING))) ||
+                    sctp_endpoint_is_peeled_off(ep, &to))) {
+                       err = -EADDRNOTAVAIL;
+                       goto out_unlock;
                }
        } else {
                asoc = sctp_id2assoc(sk, associd);
@@ -1914,6 +1915,9 @@ static int sctp_sendmsg(struct sock *sk, struct msghdr *msg, size_t msg_len)
                goto out_free;
        }
 
+       if (sctp_wspace(asoc) < msg_len)
+               sctp_prsctp_prune(asoc, sinfo, msg_len - sctp_wspace(asoc));
+
        timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
        if (!sctp_wspace(asoc)) {
                err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
@@ -2063,7 +2067,7 @@ static int sctp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 {
        struct sctp_ulpevent *event = NULL;
        struct sctp_sock *sp = sctp_sk(sk);
-       struct sk_buff *skb;
+       struct sk_buff *skb, *head_skb;
        int copied;
        int err = 0;
        int skb_len;
@@ -2074,7 +2078,8 @@ static int sctp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
 
        lock_sock(sk);
 
-       if (sctp_style(sk, TCP) && !sctp_sstate(sk, ESTABLISHED)) {
+       if (sctp_style(sk, TCP) && !sctp_sstate(sk, ESTABLISHED) &&
+           !sctp_sstate(sk, CLOSING)) {
                err = -ENOTCONN;
                goto out;
        }
@@ -2099,12 +2104,16 @@ static int sctp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
        if (err)
                goto out_free;
 
-       sock_recv_ts_and_drops(msg, sk, skb);
+       if (event->chunk && event->chunk->head_skb)
+               head_skb = event->chunk->head_skb;
+       else
+               head_skb = skb;
+       sock_recv_ts_and_drops(msg, sk, head_skb);
        if (sctp_ulpevent_is_notification(event)) {
                msg->msg_flags |= MSG_NOTIFICATION;
                sp->pf->event_msgname(event, msg->msg_name, addr_len);
        } else {
-               sp->pf->skb_msgname(skb, msg->msg_name, addr_len);
+               sp->pf->skb_msgname(head_skb, msg->msg_name, addr_len);
        }
 
        /* Check if we allow SCTP_NXTINFO. */
@@ -3661,6 +3670,80 @@ static int sctp_setsockopt_recvnxtinfo(struct sock *sk,
        return 0;
 }
 
+static int sctp_setsockopt_pr_supported(struct sock *sk,
+                                       char __user *optval,
+                                       unsigned int optlen)
+{
+       struct sctp_assoc_value params;
+       struct sctp_association *asoc;
+       int retval = -EINVAL;
+
+       if (optlen != sizeof(params))
+               goto out;
+
+       if (copy_from_user(&params, optval, optlen)) {
+               retval = -EFAULT;
+               goto out;
+       }
+
+       asoc = sctp_id2assoc(sk, params.assoc_id);
+       if (asoc) {
+               asoc->prsctp_enable = !!params.assoc_value;
+       } else if (!params.assoc_id) {
+               struct sctp_sock *sp = sctp_sk(sk);
+
+               sp->ep->prsctp_enable = !!params.assoc_value;
+       } else {
+               goto out;
+       }
+
+       retval = 0;
+
+out:
+       return retval;
+}
+
+static int sctp_setsockopt_default_prinfo(struct sock *sk,
+                                         char __user *optval,
+                                         unsigned int optlen)
+{
+       struct sctp_default_prinfo info;
+       struct sctp_association *asoc;
+       int retval = -EINVAL;
+
+       if (optlen != sizeof(info))
+               goto out;
+
+       if (copy_from_user(&info, optval, sizeof(info))) {
+               retval = -EFAULT;
+               goto out;
+       }
+
+       if (info.pr_policy & ~SCTP_PR_SCTP_MASK)
+               goto out;
+
+       if (info.pr_policy == SCTP_PR_SCTP_NONE)
+               info.pr_value = 0;
+
+       asoc = sctp_id2assoc(sk, info.pr_assoc_id);
+       if (asoc) {
+               SCTP_PR_SET_POLICY(asoc->default_flags, info.pr_policy);
+               asoc->default_timetolive = info.pr_value;
+       } else if (!info.pr_assoc_id) {
+               struct sctp_sock *sp = sctp_sk(sk);
+
+               SCTP_PR_SET_POLICY(sp->default_flags, info.pr_policy);
+               sp->default_timetolive = info.pr_value;
+       } else {
+               goto out;
+       }
+
+       retval = 0;
+
+out:
+       return retval;
+}
+
 /* API 6.2 setsockopt(), getsockopt()
  *
  * Applications use setsockopt() and getsockopt() to set or retrieve
@@ -3821,6 +3904,12 @@ static int sctp_setsockopt(struct sock *sk, int level, int optname,
        case SCTP_RECVNXTINFO:
                retval = sctp_setsockopt_recvnxtinfo(sk, optval, optlen);
                break;
+       case SCTP_PR_SUPPORTED:
+               retval = sctp_setsockopt_pr_supported(sk, optval, optlen);
+               break;
+       case SCTP_DEFAULT_PRINFO:
+               retval = sctp_setsockopt_default_prinfo(sk, optval, optlen);
+               break;
        default:
                retval = -ENOPROTOOPT;
                break;
@@ -6166,6 +6255,148 @@ static int sctp_getsockopt_recvnxtinfo(struct sock *sk, int len,
        return 0;
 }
 
+static int sctp_getsockopt_pr_supported(struct sock *sk, int len,
+                                       char __user *optval,
+                                       int __user *optlen)
+{
+       struct sctp_assoc_value params;
+       struct sctp_association *asoc;
+       int retval = -EFAULT;
+
+       if (len < sizeof(params)) {
+               retval = -EINVAL;
+               goto out;
+       }
+
+       len = sizeof(params);
+       if (copy_from_user(&params, optval, len))
+               goto out;
+
+       asoc = sctp_id2assoc(sk, params.assoc_id);
+       if (asoc) {
+               params.assoc_value = asoc->prsctp_enable;
+       } else if (!params.assoc_id) {
+               struct sctp_sock *sp = sctp_sk(sk);
+
+               params.assoc_value = sp->ep->prsctp_enable;
+       } else {
+               retval = -EINVAL;
+               goto out;
+       }
+
+       if (put_user(len, optlen))
+               goto out;
+
+       if (copy_to_user(optval, &params, len))
+               goto out;
+
+       retval = 0;
+
+out:
+       return retval;
+}
+
+static int sctp_getsockopt_default_prinfo(struct sock *sk, int len,
+                                         char __user *optval,
+                                         int __user *optlen)
+{
+       struct sctp_default_prinfo info;
+       struct sctp_association *asoc;
+       int retval = -EFAULT;
+
+       if (len < sizeof(info)) {
+               retval = -EINVAL;
+               goto out;
+       }
+
+       len = sizeof(info);
+       if (copy_from_user(&info, optval, len))
+               goto out;
+
+       asoc = sctp_id2assoc(sk, info.pr_assoc_id);
+       if (asoc) {
+               info.pr_policy = SCTP_PR_POLICY(asoc->default_flags);
+               info.pr_value = asoc->default_timetolive;
+       } else if (!info.pr_assoc_id) {
+               struct sctp_sock *sp = sctp_sk(sk);
+
+               info.pr_policy = SCTP_PR_POLICY(sp->default_flags);
+               info.pr_value = sp->default_timetolive;
+       } else {
+               retval = -EINVAL;
+               goto out;
+       }
+
+       if (put_user(len, optlen))
+               goto out;
+
+       if (copy_to_user(optval, &info, len))
+               goto out;
+
+       retval = 0;
+
+out:
+       return retval;
+}
+
+static int sctp_getsockopt_pr_assocstatus(struct sock *sk, int len,
+                                         char __user *optval,
+                                         int __user *optlen)
+{
+       struct sctp_prstatus params;
+       struct sctp_association *asoc;
+       int policy;
+       int retval = -EINVAL;
+
+       if (len < sizeof(params))
+               goto out;
+
+       len = sizeof(params);
+       if (copy_from_user(&params, optval, len)) {
+               retval = -EFAULT;
+               goto out;
+       }
+
+       policy = params.sprstat_policy;
+       if (policy & ~SCTP_PR_SCTP_MASK)
+               goto out;
+
+       asoc = sctp_id2assoc(sk, params.sprstat_assoc_id);
+       if (!asoc)
+               goto out;
+
+       if (policy == SCTP_PR_SCTP_NONE) {
+               params.sprstat_abandoned_unsent = 0;
+               params.sprstat_abandoned_sent = 0;
+               for (policy = 0; policy <= SCTP_PR_INDEX(MAX); policy++) {
+                       params.sprstat_abandoned_unsent +=
+                               asoc->abandoned_unsent[policy];
+                       params.sprstat_abandoned_sent +=
+                               asoc->abandoned_sent[policy];
+               }
+       } else {
+               params.sprstat_abandoned_unsent =
+                       asoc->abandoned_unsent[__SCTP_PR_INDEX(policy)];
+               params.sprstat_abandoned_sent =
+                       asoc->abandoned_sent[__SCTP_PR_INDEX(policy)];
+       }
+
+       if (put_user(len, optlen)) {
+               retval = -EFAULT;
+               goto out;
+       }
+
+       if (copy_to_user(optval, &params, len)) {
+               retval = -EFAULT;
+               goto out;
+       }
+
+       retval = 0;
+
+out:
+       return retval;
+}
+
 static int sctp_getsockopt(struct sock *sk, int level, int optname,
                           char __user *optval, int __user *optlen)
 {
@@ -6319,6 +6550,17 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname,
        case SCTP_RECVNXTINFO:
                retval = sctp_getsockopt_recvnxtinfo(sk, len, optval, optlen);
                break;
+       case SCTP_PR_SUPPORTED:
+               retval = sctp_getsockopt_pr_supported(sk, len, optval, optlen);
+               break;
+       case SCTP_DEFAULT_PRINFO:
+               retval = sctp_getsockopt_default_prinfo(sk, len, optval,
+                                                       optlen);
+               break;
+       case SCTP_PR_ASSOC_STATUS:
+               retval = sctp_getsockopt_pr_assocstatus(sk, len, optval,
+                                                       optlen);
+               break;
        default:
                retval = -ENOPROTOOPT;
                break;
@@ -6866,7 +7108,7 @@ static int sctp_msghdr_parse(const struct msghdr *msg, sctp_cmsgs_t *cmsgs)
 
                        if (cmsgs->srinfo->sinfo_flags &
                            ~(SCTP_UNORDERED | SCTP_ADDR_OVER |
-                             SCTP_SACK_IMMEDIATELY |
+                             SCTP_SACK_IMMEDIATELY | SCTP_PR_SCTP_MASK |
                              SCTP_ABORT | SCTP_EOF))
                                return -EINVAL;
                        break;
@@ -6890,7 +7132,7 @@ static int sctp_msghdr_parse(const struct msghdr *msg, sctp_cmsgs_t *cmsgs)
 
                        if (cmsgs->sinfo->snd_flags &
                            ~(SCTP_UNORDERED | SCTP_ADDR_OVER |
-                             SCTP_SACK_IMMEDIATELY |
+                             SCTP_SACK_IMMEDIATELY | SCTP_PR_SCTP_MASK |
                              SCTP_ABORT | SCTP_EOF))
                                return -EINVAL;
                        break;
index d1e3830..f6219b1 100644 (file)
@@ -51,7 +51,7 @@ static void sctp_ulpevent_release_frag_data(struct sctp_ulpevent *event);
 
 /* Initialize an ULP event from a given skb.  */
 static void sctp_ulpevent_init(struct sctp_ulpevent *event,
-                              int msg_flags,
+                              __u16 msg_flags,
                               unsigned int len)
 {
        memset(event, 0, sizeof(struct sctp_ulpevent));
@@ -60,7 +60,7 @@ static void sctp_ulpevent_init(struct sctp_ulpevent *event,
 }
 
 /* Create a new sctp_ulpevent.  */
-static struct sctp_ulpevent *sctp_ulpevent_new(int size, int msg_flags,
+static struct sctp_ulpevent *sctp_ulpevent_new(int size, __u16 msg_flags,
                                               gfp_t gfp)
 {
        struct sctp_ulpevent *event;
@@ -701,6 +701,12 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
 
        sctp_ulpevent_receive_data(event, asoc);
 
+       /* And hold the chunk as we need it for getting the IP headers
+        * later in recvmsg
+        */
+       sctp_chunk_hold(chunk);
+       event->chunk = chunk;
+
        event->stream = ntohs(chunk->subh.data_hdr->stream);
        event->ssn = ntohs(chunk->subh.data_hdr->ssn);
        event->ppid = chunk->subh.data_hdr->ppid;
@@ -710,11 +716,11 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
        }
        event->tsn = ntohl(chunk->subh.data_hdr->tsn);
        event->msg_flags |= chunk->chunk_hdr->flags;
-       event->iif = sctp_chunk_iif(chunk);
 
        return event;
 
 fail_mark:
+       sctp_chunk_put(chunk);
        kfree_skb(skb);
 fail:
        return NULL;
@@ -1007,6 +1013,7 @@ static void sctp_ulpevent_release_data(struct sctp_ulpevent *event)
 
 done:
        sctp_assoc_rwnd_increase(event->asoc, len);
+       sctp_chunk_put(event->chunk);
        sctp_ulpevent_release_owner(event);
 }
 
@@ -1029,6 +1036,7 @@ static void sctp_ulpevent_release_frag_data(struct sctp_ulpevent *event)
        }
 
 done:
+       sctp_chunk_put(event->chunk);
        sctp_ulpevent_release_owner(event);
 }
 
index 59658b2..a5fc9dd 100644 (file)
@@ -1286,8 +1286,8 @@ void switchdev_fib_ipv4_abort(struct fib_info *fi)
 }
 EXPORT_SYMBOL_GPL(switchdev_fib_ipv4_abort);
 
-static bool switchdev_port_same_parent_id(struct net_device *a,
-                                         struct net_device *b)
+bool switchdev_port_same_parent_id(struct net_device *a,
+                                  struct net_device *b)
 {
        struct switchdev_attr a_attr = {
                .orig_dev = a,
@@ -1323,6 +1323,7 @@ static u32 switchdev_port_fwd_mark_get(struct net_device *dev,
 
        return dev->ifindex;
 }
+EXPORT_SYMBOL_GPL(switchdev_port_same_parent_id);
 
 static void switchdev_port_fwd_mark_reset(struct net_device *group_dev,
                                          u32 old_mark, u32 *reset_mark)
index 8584cc4..4131d5a 100644 (file)
@@ -335,6 +335,21 @@ static int tipc_reset_bearer(struct net *net, struct tipc_bearer *b)
        return 0;
 }
 
+/* tipc_bearer_reset_all - reset all links on all bearers
+ */
+void tipc_bearer_reset_all(struct net *net)
+{
+       struct tipc_net *tn = tipc_net(net);
+       struct tipc_bearer *b;
+       int i;
+
+       for (i = 0; i < MAX_BEARERS; i++) {
+               b = rcu_dereference_rtnl(tn->bearer_list[i]);
+               if (b)
+                       tipc_reset_bearer(net, b);
+       }
+}
+
 /**
  * bearer_disable
  *
index 0d337c7..f1e6db5 100644 (file)
@@ -198,6 +198,7 @@ void tipc_bearer_add_dest(struct net *net, u32 bearer_id, u32 dest);
 void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest);
 struct tipc_bearer *tipc_bearer_find(struct net *net, const char *name);
 struct tipc_media *tipc_media_find(const char *name);
+void tipc_bearer_reset_all(struct net *net);
 int tipc_bearer_setup(void);
 void tipc_bearer_cleanup(void);
 void tipc_bearer_stop(struct net *net);
index c1df33f..877d94f 100644 (file)
@@ -350,6 +350,8 @@ void tipc_link_remove_bc_peer(struct tipc_link *snd_l,
        u16 ack = snd_l->snd_nxt - 1;
 
        snd_l->ackers--;
+       rcv_l->bc_peer_is_up = true;
+       rcv_l->state = LINK_ESTABLISHED;
        tipc_link_bc_ack_rcv(rcv_l, ack, xmitq);
        tipc_link_reset(rcv_l);
        rcv_l->state = LINK_RESET;
@@ -1582,7 +1584,12 @@ void tipc_link_bc_sync_rcv(struct tipc_link *l, struct tipc_msg *hdr,
        if (!msg_peer_node_is_up(hdr))
                return;
 
-       l->bc_peer_is_up = true;
+       /* Open when peer acknowledges our bcast init msg (pkt #1) */
+       if (msg_ack(hdr))
+               l->bc_peer_is_up = true;
+
+       if (!l->bc_peer_is_up)
+               return;
 
        /* Ignore if peers_snd_nxt goes beyond receive window */
        if (more(peers_snd_nxt, l->rcv_nxt + l->window))
index a3fc0a3..95cc78b 100644 (file)
@@ -1297,10 +1297,6 @@ static void tipc_node_bc_rcv(struct net *net, struct sk_buff *skb, int bearer_id
 
        rc = tipc_bcast_rcv(net, be->link, skb);
 
-       /* Broadcast link reset may happen at reassembly failure */
-       if (rc & TIPC_LINK_DOWN_EVT)
-               tipc_node_reset_links(n);
-
        /* Broadcast ACKs are sent on a unicast link */
        if (rc & TIPC_LINK_SND_BC_ACK) {
                tipc_node_read_lock(n);
@@ -1320,6 +1316,17 @@ static void tipc_node_bc_rcv(struct net *net, struct sk_buff *skb, int bearer_id
                spin_unlock_bh(&be->inputq2.lock);
                tipc_sk_mcast_rcv(net, &be->arrvq, &be->inputq2);
        }
+
+       if (rc & TIPC_LINK_DOWN_EVT) {
+               /* Reception reassembly failure => reset all links to peer */
+               if (!tipc_link_is_up(be->link))
+                       tipc_node_reset_links(n);
+
+               /* Retransmission failure => reset all links to all peers */
+               if (!tipc_link_is_up(tipc_bc_sndlink(net)))
+                       tipc_bearer_reset_all(net);
+       }
+
        tipc_node_put(n);
 }
 
index 5782f71..46417f9 100644 (file)
@@ -3568,16 +3568,16 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
                params.smps_mode = NL80211_SMPS_OFF;
        }
 
+       params.pbss = nla_get_flag(info->attrs[NL80211_ATTR_PBSS]);
+       if (params.pbss && !rdev->wiphy.bands[NL80211_BAND_60GHZ])
+               return -EOPNOTSUPP;
+
        if (info->attrs[NL80211_ATTR_ACL_POLICY]) {
                params.acl = parse_acl_data(&rdev->wiphy, info);
                if (IS_ERR(params.acl))
                        return PTR_ERR(params.acl);
        }
 
-       params.pbss = nla_get_flag(info->attrs[NL80211_ATTR_PBSS]);
-       if (params.pbss && !rdev->wiphy.bands[NL80211_BAND_60GHZ])
-               return -EOPNOTSUPP;
-
        wdev_lock(wdev);
        err = rdev_start_ap(rdev, dev, &params);
        if (!err) {
index 2443ee3..b7d1592 100644 (file)
@@ -721,6 +721,8 @@ __ieee80211_amsdu_copy(struct sk_buff *skb, unsigned int hlen,
         * alignment since sizeof(struct ethhdr) is 14.
         */
        frame = dev_alloc_skb(hlen + sizeof(struct ethhdr) + 2 + cur_len);
+       if (!frame)
+               return NULL;
 
        skb_reserve(frame, hlen + sizeof(struct ethhdr) + 2);
        skb_copy_bits(skb, offset, skb_put(frame, cur_len), cur_len);
index a98b780..d2d2b35 100644 (file)
@@ -21,6 +21,8 @@ hostprogs-y += spintest
 hostprogs-y += map_perf_test
 hostprogs-y += test_overhead
 hostprogs-y += test_cgrp2_array_pin
+hostprogs-y += xdp1
+hostprogs-y += xdp2
 
 test_verifier-objs := test_verifier.o libbpf.o
 test_maps-objs := test_maps.o libbpf.o
@@ -42,6 +44,9 @@ spintest-objs := bpf_load.o libbpf.o spintest_user.o
 map_perf_test-objs := bpf_load.o libbpf.o map_perf_test_user.o
 test_overhead-objs := bpf_load.o libbpf.o test_overhead_user.o
 test_cgrp2_array_pin-objs := libbpf.o test_cgrp2_array_pin.o
+xdp1-objs := bpf_load.o libbpf.o xdp1_user.o
+# reuse xdp1 source intentionally
+xdp2-objs := bpf_load.o libbpf.o xdp1_user.o
 
 # Tell kbuild to always build the programs
 always := $(hostprogs-y)
@@ -64,6 +69,8 @@ always += test_overhead_tp_kern.o
 always += test_overhead_kprobe_kern.o
 always += parse_varlen.o parse_simple.o parse_ldabs.o
 always += test_cgrp2_tc_kern.o
+always += xdp1_kern.o
+always += xdp2_kern.o
 
 HOSTCFLAGS += -I$(objtree)/usr/include
 
@@ -84,6 +91,8 @@ HOSTLOADLIBES_offwaketime += -lelf
 HOSTLOADLIBES_spintest += -lelf
 HOSTLOADLIBES_map_perf_test += -lelf -lrt
 HOSTLOADLIBES_test_overhead += -lelf -lrt
+HOSTLOADLIBES_xdp1 += -lelf
+HOSTLOADLIBES_xdp2 += -lelf
 
 # Allows pointing LLC/CLANG to a LLVM backend with bpf support, redefine on cmdline:
 #  make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
index 022af71..0cfda23 100644 (file)
@@ -50,6 +50,7 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
        bool is_kprobe = strncmp(event, "kprobe/", 7) == 0;
        bool is_kretprobe = strncmp(event, "kretprobe/", 10) == 0;
        bool is_tracepoint = strncmp(event, "tracepoint/", 11) == 0;
+       bool is_xdp = strncmp(event, "xdp", 3) == 0;
        enum bpf_prog_type prog_type;
        char buf[256];
        int fd, efd, err, id;
@@ -66,6 +67,8 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
                prog_type = BPF_PROG_TYPE_KPROBE;
        } else if (is_tracepoint) {
                prog_type = BPF_PROG_TYPE_TRACEPOINT;
+       } else if (is_xdp) {
+               prog_type = BPF_PROG_TYPE_XDP;
        } else {
                printf("Unknown event '%s'\n", event);
                return -1;
@@ -79,6 +82,9 @@ static int load_and_attach(const char *event, struct bpf_insn *prog, int size)
 
        prog_fd[prog_cnt++] = fd;
 
+       if (is_xdp)
+               return 0;
+
        if (is_socket) {
                event += 6;
                if (*event != '/')
@@ -319,6 +325,7 @@ int load_bpf_file(char *path)
                        if (memcmp(shname_prog, "kprobe/", 7) == 0 ||
                            memcmp(shname_prog, "kretprobe/", 10) == 0 ||
                            memcmp(shname_prog, "tracepoint/", 11) == 0 ||
+                           memcmp(shname_prog, "xdp", 3) == 0 ||
                            memcmp(shname_prog, "socket", 6) == 0)
                                load_and_attach(shname_prog, insns, data_prog->d_size);
                }
@@ -336,6 +343,7 @@ int load_bpf_file(char *path)
                if (memcmp(shname, "kprobe/", 7) == 0 ||
                    memcmp(shname, "kretprobe/", 10) == 0 ||
                    memcmp(shname, "tracepoint/", 11) == 0 ||
+                   memcmp(shname, "xdp", 3) == 0 ||
                    memcmp(shname, "socket", 6) == 0)
                        load_and_attach(shname, data->d_buf, data->d_size);
        }
diff --git a/samples/bpf/xdp1_kern.c b/samples/bpf/xdp1_kern.c
new file mode 100644 (file)
index 0000000..2197421
--- /dev/null
@@ -0,0 +1,93 @@
+/* Copyright (c) 2016 PLUMgrid
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#define KBUILD_MODNAME "foo"
+#include <uapi/linux/bpf.h>
+#include <linux/in.h>
+#include <linux/if_ether.h>
+#include <linux/if_packet.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include "bpf_helpers.h"
+
+struct bpf_map_def SEC("maps") rxcnt = {
+       .type = BPF_MAP_TYPE_PERCPU_ARRAY,
+       .key_size = sizeof(u32),
+       .value_size = sizeof(long),
+       .max_entries = 256,
+};
+
+static int parse_ipv4(void *data, u64 nh_off, void *data_end)
+{
+       struct iphdr *iph = data + nh_off;
+
+       if (iph + 1 > data_end)
+               return 0;
+       return iph->protocol;
+}
+
+static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+{
+       struct ipv6hdr *ip6h = data + nh_off;
+
+       if (ip6h + 1 > data_end)
+               return 0;
+       return ip6h->nexthdr;
+}
+
+SEC("xdp1")
+int xdp_prog1(struct xdp_md *ctx)
+{
+       void *data_end = (void *)(long)ctx->data_end;
+       void *data = (void *)(long)ctx->data;
+       struct ethhdr *eth = data;
+       int rc = XDP_DROP;
+       long *value;
+       u16 h_proto;
+       u64 nh_off;
+       u32 ipproto;
+
+       nh_off = sizeof(*eth);
+       if (data + nh_off > data_end)
+               return rc;
+
+       h_proto = eth->h_proto;
+
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return rc;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return rc;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+
+       if (h_proto == htons(ETH_P_IP))
+               ipproto = parse_ipv4(data, nh_off, data_end);
+       else if (h_proto == htons(ETH_P_IPV6))
+               ipproto = parse_ipv6(data, nh_off, data_end);
+       else
+               ipproto = 0;
+
+       value = bpf_map_lookup_elem(&rxcnt, &ipproto);
+       if (value)
+               *value += 1;
+
+       return rc;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/samples/bpf/xdp1_user.c b/samples/bpf/xdp1_user.c
new file mode 100644 (file)
index 0000000..a5e109e
--- /dev/null
@@ -0,0 +1,181 @@
+/* Copyright (c) 2016 PLUMgrid
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#include <linux/bpf.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+#include <assert.h>
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <unistd.h>
+#include "bpf_load.h"
+#include "libbpf.h"
+
+static int set_link_xdp_fd(int ifindex, int fd)
+{
+       struct sockaddr_nl sa;
+       int sock, seq = 0, len, ret = -1;
+       char buf[4096];
+       struct nlattr *nla, *nla_xdp;
+       struct {
+               struct nlmsghdr  nh;
+               struct ifinfomsg ifinfo;
+               char             attrbuf[64];
+       } req;
+       struct nlmsghdr *nh;
+       struct nlmsgerr *err;
+
+       memset(&sa, 0, sizeof(sa));
+       sa.nl_family = AF_NETLINK;
+
+       sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
+       if (sock < 0) {
+               printf("open netlink socket: %s\n", strerror(errno));
+               return -1;
+       }
+
+       if (bind(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
+               printf("bind to netlink: %s\n", strerror(errno));
+               goto cleanup;
+       }
+
+       memset(&req, 0, sizeof(req));
+       req.nh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
+       req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
+       req.nh.nlmsg_type = RTM_SETLINK;
+       req.nh.nlmsg_pid = 0;
+       req.nh.nlmsg_seq = ++seq;
+       req.ifinfo.ifi_family = AF_UNSPEC;
+       req.ifinfo.ifi_index = ifindex;
+       nla = (struct nlattr *)(((char *)&req)
+                               + NLMSG_ALIGN(req.nh.nlmsg_len));
+       nla->nla_type = NLA_F_NESTED | 43/*IFLA_XDP*/;
+
+       nla_xdp = (struct nlattr *)((char *)nla + NLA_HDRLEN);
+       nla_xdp->nla_type = 1/*IFLA_XDP_FD*/;
+       nla_xdp->nla_len = NLA_HDRLEN + sizeof(int);
+       memcpy((char *)nla_xdp + NLA_HDRLEN, &fd, sizeof(fd));
+       nla->nla_len = NLA_HDRLEN + nla_xdp->nla_len;
+
+       req.nh.nlmsg_len += NLA_ALIGN(nla->nla_len);
+
+       if (send(sock, &req, req.nh.nlmsg_len, 0) < 0) {
+               printf("send to netlink: %s\n", strerror(errno));
+               goto cleanup;
+       }
+
+       len = recv(sock, buf, sizeof(buf), 0);
+       if (len < 0) {
+               printf("recv from netlink: %s\n", strerror(errno));
+               goto cleanup;
+       }
+
+       for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
+            nh = NLMSG_NEXT(nh, len)) {
+               if (nh->nlmsg_pid != getpid()) {
+                       printf("Wrong pid %d, expected %d\n",
+                              nh->nlmsg_pid, getpid());
+                       goto cleanup;
+               }
+               if (nh->nlmsg_seq != seq) {
+                       printf("Wrong seq %d, expected %d\n",
+                              nh->nlmsg_seq, seq);
+                       goto cleanup;
+               }
+               switch (nh->nlmsg_type) {
+               case NLMSG_ERROR:
+                       err = (struct nlmsgerr *)NLMSG_DATA(nh);
+                       if (!err->error)
+                               continue;
+                       printf("nlmsg error %s\n", strerror(-err->error));
+                       goto cleanup;
+               case NLMSG_DONE:
+                       break;
+               }
+       }
+
+       ret = 0;
+
+cleanup:
+       close(sock);
+       return ret;
+}
+
+static int ifindex;
+
+static void int_exit(int sig)
+{
+       set_link_xdp_fd(ifindex, -1);
+       exit(0);
+}
+
+/* simple per-protocol drop counter
+ */
+static void poll_stats(int interval)
+{
+       unsigned int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
+       const unsigned int nr_keys = 256;
+       __u64 values[nr_cpus], prev[nr_keys][nr_cpus];
+       __u32 key;
+       int i;
+
+       memset(prev, 0, sizeof(prev));
+
+       while (1) {
+               sleep(interval);
+
+               for (key = 0; key < nr_keys; key++) {
+                       __u64 sum = 0;
+
+                       assert(bpf_lookup_elem(map_fd[0], &key, values) == 0);
+                       for (i = 0; i < nr_cpus; i++)
+                               sum += (values[i] - prev[key][i]);
+                       if (sum)
+                               printf("proto %u: %10llu pkt/s\n",
+                                      key, sum / interval);
+                       memcpy(prev[key], values, sizeof(values));
+               }
+       }
+}
+
+int main(int ac, char **argv)
+{
+       char filename[256];
+
+       snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
+
+       if (ac != 2) {
+               printf("usage: %s IFINDEX\n", argv[0]);
+               return 1;
+       }
+
+       ifindex = strtoul(argv[1], NULL, 0);
+
+       if (load_bpf_file(filename)) {
+               printf("%s", bpf_log_buf);
+               return 1;
+       }
+
+       if (!prog_fd[0]) {
+               printf("load_bpf_file: %s\n", strerror(errno));
+               return 1;
+       }
+
+       signal(SIGINT, int_exit);
+
+       if (set_link_xdp_fd(ifindex, prog_fd[0]) < 0) {
+               printf("link set xdp fd failed\n");
+               return 1;
+       }
+
+       poll_stats(2);
+
+       return 0;
+}
diff --git a/samples/bpf/xdp2_kern.c b/samples/bpf/xdp2_kern.c
new file mode 100644 (file)
index 0000000..e012888
--- /dev/null
@@ -0,0 +1,114 @@
+/* Copyright (c) 2016 PLUMgrid
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ */
+#define KBUILD_MODNAME "foo"
+#include <uapi/linux/bpf.h>
+#include <linux/in.h>
+#include <linux/if_ether.h>
+#include <linux/if_packet.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include "bpf_helpers.h"
+
+struct bpf_map_def SEC("maps") rxcnt = {
+       .type = BPF_MAP_TYPE_PERCPU_ARRAY,
+       .key_size = sizeof(u32),
+       .value_size = sizeof(long),
+       .max_entries = 256,
+};
+
+static void swap_src_dst_mac(void *data)
+{
+       unsigned short *p = data;
+       unsigned short dst[3];
+
+       dst[0] = p[0];
+       dst[1] = p[1];
+       dst[2] = p[2];
+       p[0] = p[3];
+       p[1] = p[4];
+       p[2] = p[5];
+       p[3] = dst[0];
+       p[4] = dst[1];
+       p[5] = dst[2];
+}
+
+static int parse_ipv4(void *data, u64 nh_off, void *data_end)
+{
+       struct iphdr *iph = data + nh_off;
+
+       if (iph + 1 > data_end)
+               return 0;
+       return iph->protocol;
+}
+
+static int parse_ipv6(void *data, u64 nh_off, void *data_end)
+{
+       struct ipv6hdr *ip6h = data + nh_off;
+
+       if (ip6h + 1 > data_end)
+               return 0;
+       return ip6h->nexthdr;
+}
+
+SEC("xdp1")
+int xdp_prog1(struct xdp_md *ctx)
+{
+       void *data_end = (void *)(long)ctx->data_end;
+       void *data = (void *)(long)ctx->data;
+       struct ethhdr *eth = data;
+       int rc = XDP_DROP;
+       long *value;
+       u16 h_proto;
+       u64 nh_off;
+       u32 ipproto;
+
+       nh_off = sizeof(*eth);
+       if (data + nh_off > data_end)
+               return rc;
+
+       h_proto = eth->h_proto;
+
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return rc;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+       if (h_proto == htons(ETH_P_8021Q) || h_proto == htons(ETH_P_8021AD)) {
+               struct vlan_hdr *vhdr;
+
+               vhdr = data + nh_off;
+               nh_off += sizeof(struct vlan_hdr);
+               if (data + nh_off > data_end)
+                       return rc;
+               h_proto = vhdr->h_vlan_encapsulated_proto;
+       }
+
+       if (h_proto == htons(ETH_P_IP))
+               ipproto = parse_ipv4(data, nh_off, data_end);
+       else if (h_proto == htons(ETH_P_IPV6))
+               ipproto = parse_ipv6(data, nh_off, data_end);
+       else
+               ipproto = 0;
+
+       value = bpf_map_lookup_elem(&rxcnt, &ipproto);
+       if (value)
+               *value += 1;
+
+       if (ipproto == IPPROTO_UDP) {
+               swap_src_dst_mac(data);
+               rc = XDP_TX;
+       }
+
+       return rc;
+}
+
+char _license[] SEC("license") = "GPL";
index 33b70fd..f70ea7d 100644 (file)
@@ -14,12 +14,13 @@ function usage() {
     echo "  -b : (\$BURST)     HW level bursting of SKBs"
     echo "  -v : (\$VERBOSE)   verbose"
     echo "  -x : (\$DEBUG)     debug"
+    echo "  -6 : (\$IP6)       IPv6"
     echo ""
 }
 
 ##  --- Parse command line arguments / parameters ---
 ## echo "Commandline options:"
-while getopts "s:i:d:m:t:c:b:vxh" option; do
+while getopts "s:i:d:m:t:c:b:vxh6" option; do
     case $option in
         i) # interface
           export DEV=$OPTARG
@@ -59,6 +60,10 @@ while getopts "s:i:d:m:t:c:b:vxh" option; do
           export DEBUG=yes
           info "Debug mode: DEBUG=$DEBUG"
           ;;
+       6)
+         export IP6=6
+         info "IP6: IP6=$IP6"
+         ;;
         h|?|*)
           usage;
           err 2 "[ERROR] Unknown parameters!!!"
diff --git a/samples/pktgen/pktgen.conf-1-1-flows b/samples/pktgen/pktgen.conf-1-1-flows
deleted file mode 100755 (executable)
index 081749c..0000000
+++ /dev/null
@@ -1,67 +0,0 @@
-#!/bin/bash
-
-#modprobe pktgen
-
-
-function pgset() {
-    local result
-
-    echo $1 > $PGDEV
-
-    result=`cat $PGDEV | fgrep "Result: OK:"`
-    if [ "$result" = "" ]; then
-         cat $PGDEV | fgrep Result:
-    fi
-}
-
-# Config Start Here -----------------------------------------------------------
-
-
-# thread config
-# Each CPU has its own thread. One CPU example. We add eth1.
-
-PGDEV=/proc/net/pktgen/kpktgend_0
-  echo "Removing all devices"
- pgset "rem_device_all"
-  echo "Adding eth1"
- pgset "add_device eth1"
-
-
-# device config
-# delay 0
-# We need to do alloc for every skb since we cannot clone here.
-
-CLONE_SKB="clone_skb 0"
-# NIC adds 4 bytes CRC
-PKT_SIZE="pkt_size 60"
-
-# COUNT 0 means forever
-#COUNT="count 0"
-COUNT="count 10000000"
-DELAY="delay 0"
-
-PGDEV=/proc/net/pktgen/eth1
-  echo "Configuring $PGDEV"
- pgset "$COUNT"
- pgset "$CLONE_SKB"
- pgset "$PKT_SIZE"
- pgset "$DELAY"
- # Random address with in the min-max range
- pgset "flag IPDST_RND"
- pgset "dst_min 10.0.0.0"
- pgset "dst_max 10.255.255.255"
-
- # 8k Concurrent flows at 4 pkts
- pgset "flows 8192"
- pgset "flowlen 4"
-
- pgset "dst_mac  00:04:23:08:91:dc"
-
-# Time to run
-PGDEV=/proc/net/pktgen/pgctrl
-
- echo "Running... ctrl^C to stop"
- trap true INT
- pgset "start"
- echo "Done"
- cat /proc/net/pktgen/eth1
diff --git a/samples/pktgen/pktgen.conf-1-1-rdos b/samples/pktgen/pktgen.conf-1-1-rdos
deleted file mode 100755 (executable)
index c7553be..0000000
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/bash
-
-#modprobe pktgen
-
-
-function pgset() {
-    local result
-
-    echo $1 > $PGDEV
-
-    result=`cat $PGDEV | fgrep "Result: OK:"`
-    if [ "$result" = "" ]; then
-         cat $PGDEV | fgrep Result:
-    fi
-}
-
-# Config Start Here -----------------------------------------------------------
-
-
-# thread config
-# Each CPU has its own thread. One CPU example. We add eth1.
-
-PGDEV=/proc/net/pktgen/kpktgend_0
-  echo "Removing all devices"
- pgset "rem_device_all"
-  echo "Adding eth1"
- pgset "add_device eth1"
-
-
-# device config
-# delay 0
-
-# We need to do alloc for every skb since we cannot clone here.
-
-CLONE_SKB="clone_skb 0"
-# NIC adds 4 bytes CRC
-PKT_SIZE="pkt_size 60"
-
-# COUNT 0 means forever
-#COUNT="count 0"
-COUNT="count 10000000"
-DELAY="delay 0"
-
-PGDEV=/proc/net/pktgen/eth1
-  echo "Configuring $PGDEV"
- pgset "$COUNT"
- pgset "$CLONE_SKB"
- pgset "$PKT_SIZE"
- pgset "$DELAY"
- # Random address with in the min-max range
- pgset "flag IPDST_RND"
- pgset "dst_min 10.0.0.0"
- pgset "dst_max 10.255.255.255"
-
- pgset "dst_mac  00:04:23:08:91:dc"
-
-# Time to run
-PGDEV=/proc/net/pktgen/pgctrl
-
- echo "Running... ctrl^C to stop"
- trap true INT
- pgset "start"
- echo "Done"
- cat /proc/net/pktgen/eth1
index cb15903..f3e1bed 100755 (executable)
@@ -34,7 +34,9 @@ root_check_run_with_sudo "$@"
 source ${basedir}/parameters.sh
 # Using invalid DST_MAC will cause the packets to get dropped in
 # ip_rcv() which is part of the test
-[ -z "$DEST_IP" ] && DEST_IP="198.18.0.42"
+if [ -z "$DEST_IP" ]; then
+    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
+fi
 [ -z "$DST_MAC" ] && DST_MAC="90:e2:ba:ff:ff:ff"
 [ -z "$BURST" ] && BURST=1024
 
@@ -64,7 +66,7 @@ for ((thread = 0; thread < $THREADS; thread++)); do
 
     # Destination
     pg_set $dev "dst_mac $DST_MAC"
-    pg_set $dev "dst $DEST_IP"
+    pg_set $dev "dst$IP6 $DEST_IP"
 
     # Inject packet into RX path of stack
     pg_set $dev "xmit_mode netif_receive"
index 4e4e92b..cc102e9 100755 (executable)
@@ -13,7 +13,9 @@ root_check_run_with_sudo "$@"
 
 # Parameter parsing via include
 source ${basedir}/parameters.sh
-[ -z "$DEST_IP" ] && DEST_IP="198.18.0.42"
+if [ -z "$DEST_IP" ]; then
+    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
+fi
 [ -z "$DST_MAC" ] && DST_MAC="90:e2:ba:ff:ff:ff"
 
 # Burst greater than 1 are invalid for queue_xmit mode
@@ -47,7 +49,7 @@ for ((thread = 0; thread < $THREADS; thread++)); do
 
     # Destination
     pg_set $dev "dst_mac $DST_MAC"
-    pg_set $dev "dst $DEST_IP"
+    pg_set $dev "dst$IP6 $DEST_IP"
 
     # Inject packet into TX qdisc egress path of stack
     pg_set $dev "xmit_mode queue_xmit"
index 8c9d318..29ef4ba 100755 (executable)
@@ -14,7 +14,9 @@ root_check_run_with_sudo "$@"
 source ${basedir}/parameters.sh
 #
 # Set some default params, if they didn't get set
-[ -z "$DEST_IP" ] && DEST_IP="198.18.0.42"
+if [ -z "$DEST_IP" ]; then
+    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
+fi
 [ -z "$CLONE_SKB" ] && CLONE_SKB="0"
 # Example enforce param "-m" for dst_mac
 [ -z "$DST_MAC" ] && usage && err 2 "Must specify -m dst_mac"
@@ -54,7 +56,7 @@ pg_set $DEV "flag NO_TIMESTAMP"
 
 # Destination
 pg_set $DEV "dst_mac $DST_MAC"
-pg_set $DEV "dst $DEST_IP"
+pg_set $DEV "dst$IP6 $DEST_IP"
 
 # Setup random UDP port src range
 pg_set $DEV "flag UDPSRC_RND"
index 32467ae..c88a161 100755 (executable)
@@ -23,7 +23,9 @@ UDP_MIN=9
 UDP_MAX=109
 
 # (example of setting default params in your script)
-[ -z "$DEST_IP" ] && DEST_IP="198.18.0.42"
+if [ -z "$DEST_IP" ]; then
+    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
+fi
 [ -z "$DST_MAC" ] && DST_MAC="90:e2:ba:ff:ff:ff"
 
 # General cleanup everything since last run
@@ -54,7 +56,7 @@ for ((thread = 0; thread < $THREADS; thread++)); do
 
     # Destination
     pg_set $dev "dst_mac $DST_MAC"
-    pg_set $dev "dst $DEST_IP"
+    pg_set $dev "dst$IP6 $DEST_IP"
 
     # Setup random UDP port src range
     pg_set $dev "flag UDPSRC_RND"
index 775f5d0..80cf8f5 100755 (executable)
@@ -25,7 +25,9 @@ root_check_run_with_sudo "$@"
 # Parameter parsing via include
 source ${basedir}/parameters.sh
 # Set some default params, if they didn't get set
-[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
+if [ -z "$DEST_IP" ]; then
+    [ -z "$IP6" ] && DEST_IP="198.18.0.42" || DEST_IP="FD00::1"
+fi
 [ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
 [ -z "$BURST" ]     && BURST=32
 [ -z "$CLONE_SKB" ] && CLONE_SKB="100000"
@@ -55,7 +57,7 @@ for ((thread = 0; thread < $THREADS; thread++)); do
 
     # Destination
     pg_set $dev "dst_mac $DST_MAC"
-    pg_set $dev "dst $DEST_IP"
+    pg_set $dev "dst$IP6 $DEST_IP"
 
     # Setup burst, for easy testing -b 0 disable bursting
     # (internally in pktgen default and minimum burst=1)
diff --git a/samples/pktgen/pktgen_sample04_many_flows.sh b/samples/pktgen/pktgen_sample04_many_flows.sh
new file mode 100755 (executable)
index 0000000..f60412e
--- /dev/null
@@ -0,0 +1,93 @@
+#!/bin/bash
+#
+# Script example for many flows testing
+#
+# The number of simultaneous flows is limited by $FLOWS, and the
+# number of packets each flow sends is controlled by $FLOWLEN
+#
+basedir=`dirname $0`
+source ${basedir}/functions.sh
+root_check_run_with_sudo "$@"
+
+# Parameter parsing via include
+source ${basedir}/parameters.sh
+# Set some default params, if they didn't get set
+[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
+[ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+[ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+
+# NOTICE:  Script specific settings
+# =======
+# Limiting the number of concurrent flows ($FLOWS)
+# and also set how many packets each flow contains ($FLOWLEN)
+#
+[ -z "$FLOWS" ]     && FLOWS="8000"
+[ -z "$FLOWLEN" ]   && FLOWLEN="10"
+
+# Base Config
+DELAY="0"  # Zero means max speed
+COUNT="0"  # Zero means indefinitely
+
+if [[ -n "$BURST" ]]; then
+    err 1 "Bursting not supported for this mode"
+fi
+
+# General cleanup everything since last run
+pg_ctrl "reset"
+
+# Threads are specified with parameter -t value in $THREADS
+for ((thread = 0; thread < $THREADS; thread++)); do
+    dev=${DEV}@${thread}
+
+    # Remove all other devices from this thread, then add $dev
+    pg_thread $thread "rem_device_all"
+    pg_thread $thread "add_device" $dev
+
+    # Base config
+    pg_set $dev "flag QUEUE_MAP_CPU"
+    pg_set $dev "count $COUNT"
+    pg_set $dev "clone_skb $CLONE_SKB"
+    pg_set $dev "pkt_size $PKT_SIZE"
+    pg_set $dev "delay $DELAY"
+    pg_set $dev "flag NO_TIMESTAMP"
+
+    # Single destination
+    pg_set $dev "dst_mac $DST_MAC"
+    pg_set $dev "dst $DEST_IP"
+
+    # Randomize source IP-addresses
+    pg_set $dev "flag IPSRC_RND"
+    pg_set $dev "src_min 198.18.0.0"
+    pg_set $dev "src_max 198.19.255.255"
+
+    # Limit number of flows (max 65535)
+    pg_set $dev "flows $FLOWS"
+    #
+    # How many packets a flow will send, before flow "entry" is
+    # re-generated/setup.
+    pg_set $dev "flowlen $FLOWLEN"
+    #
+    # Flag FLOW_SEQ causes $FLOWLEN packets from the same flow to be
+    # sent back-to-back before the next flow is selected
+    # incrementally.  This helps lookup caches, and is more realistic.
+    #
+    pg_set $dev "flag FLOW_SEQ"
+
+done
+
+# Run if user hits control-c
+function print_result() {
+    # Print results
+    for ((thread = 0; thread < $THREADS; thread++)); do
+       dev=${DEV}@${thread}
+       echo "Device: $dev"
+       cat /proc/net/pktgen/$dev | grep -A2 "Result:"
+    done
+}
+# trap keyboard interrupt (Ctrl-C)
+trap true SIGINT
+
+echo "Running... ctrl^C to stop" >&2
+pg_ctrl "start"
+
+print_result
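The interrupt idiom at the end of the script above can be isolated as a small sketch (bash assumed): trapping SIGINT with a no-op (`true`) lets the script survive Ctrl-C, so the result-printing code after the blocking `pg_ctrl "start"` still runs.

```shell
# Install a no-op SIGINT handler, as the pktgen samples do.
trap true SIGINT

kill -INT $$      # deliver SIGINT to ourselves, as Ctrl-C would
status=alive      # reached only because the trap swallowed the signal
echo "still running: $status"
```

Without the trap, the SIGINT would terminate the script before any results were printed.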
diff --git a/samples/pktgen/pktgen_sample05_flow_per_thread.sh b/samples/pktgen/pktgen_sample05_flow_per_thread.sh
new file mode 100755 (executable)
index 0000000..32ad818
--- /dev/null
@@ -0,0 +1,81 @@
+#!/bin/bash
+#
+# Script will generate one flow per thread (-t N)
+#  - Same destination IP
+#  - Fake source IPs for each flow (fixed based on thread number)
+#
+# Useful for scale testing on receiver, to see whether silo'ing flows
+# works and scales.  For optimal scalability (on receiver) each
+# separate-flow should not access shared variables/data. This script
+# helps magnify any of these scaling issues by overloading the receiver.
+#
+basedir=`dirname $0`
+source ${basedir}/functions.sh
+root_check_run_with_sudo "$@"
+
+# Parameter parsing via include
+source ${basedir}/parameters.sh
+# Set some default params, if they didn't get set
+[ -z "$DEST_IP" ]   && DEST_IP="198.18.0.42"
+[ -z "$DST_MAC" ]   && DST_MAC="90:e2:ba:ff:ff:ff"
+[ -z "$CLONE_SKB" ] && CLONE_SKB="0"
+[ -z "$BURST" ]     && BURST=32
+
+
+# Base Config
+DELAY="0"  # Zero means max speed
+COUNT="0"  # Zero means indefinitely
+
+# General cleanup everything since last run
+pg_ctrl "reset"
+
+# Threads are specified with parameter -t value in $THREADS
+for ((thread = 0; thread < $THREADS; thread++)); do
+    dev=${DEV}@${thread}
+
+    # Add remove all other devices and add_device $dev to thread
+    pg_thread $thread "rem_device_all"
+    pg_thread $thread "add_device" $dev
+
+    # Base config
+    pg_set $dev "flag QUEUE_MAP_CPU"
+    pg_set $dev "count $COUNT"
+    pg_set $dev "clone_skb $CLONE_SKB"
+    pg_set $dev "pkt_size $PKT_SIZE"
+    pg_set $dev "delay $DELAY"
+    pg_set $dev "flag NO_TIMESTAMP"
+
+    # Single destination
+    pg_set $dev "dst_mac $DST_MAC"
+    pg_set $dev "dst $DEST_IP"
+
+    # Setup source IP-addresses based on thread number
+    pg_set $dev "src_min 198.18.$((thread+1)).1"
+    pg_set $dev "src_max 198.18.$((thread+1)).1"
+
+    # Setup burst, for easy testing -b 0 disable bursting
+    # (internally in pktgen default and minimum burst=1)
+    if [[ ${BURST} -ne 0 ]]; then
+       pg_set $dev "burst $BURST"
+    else
+       info "$dev: Not using burst"
+    fi
+
+done
+
+# Run if user hits control-c
+function print_result() {
+    # Print results
+    for ((thread = 0; thread < $THREADS; thread++)); do
+       dev=${DEV}@${thread}
+       echo "Device: $dev"
+       cat /proc/net/pktgen/$dev | grep -A2 "Result:"
+    done
+}
+# trap keyboard interrupt (Ctrl-C)
+trap true SIGINT
+
+echo "Running... ctrl^C to stop" >&2
+pg_ctrl "start"
+
+print_result
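The per-thread source address lines above deserve a quick illustration: each thread gets its own third octet, and `src_min == src_max`, so every thread emits exactly one fixed flow. A standalone sketch:

```shell
# Each thread is pinned to a single source IP, 198.18.<thread+1>.1,
# giving one flow per thread as in the script above.
for ((thread = 0; thread < 3; thread++)); do
    src="198.18.$((thread+1)).1"
    echo "thread $thread -> $src"
done
# prints:
# thread 0 -> 198.18.1.1
# thread 1 -> 198.18.2.1
# thread 2 -> 198.18.3.1
```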
index cd129e6..8b00031 100644 (file)
@@ -13,9 +13,11 @@ quiet_cmd_gen_constants_py = GEN     $@
        $(CPP) -E -x c -P $(c_flags) $< > $@ ;\
        sed -i '1,/<!-- end-c-headers -->/d;' $@
 
-$(obj)/constants.py: $(SRCTREE)/$(obj)/constants.py.in
-       $(call if_changed,gen_constants_py)
+targets += constants.py
+$(obj)/constants.py: $(SRCTREE)/$(obj)/constants.py.in FORCE
+       $(call if_changed_dep,gen_constants_py)
 
 build_constants_py: $(obj)/constants.py
+       @:
 
 clean-files := *.pyc *.pyo $(if $(KBUILD_SRC),*.py) $(obj)/constants.py
index 07e6c2b..7986f4e 100644 (file)
@@ -14,7 +14,6 @@
 
 #include <linux/fs.h>
 #include <linux/mount.h>
-#include <linux/radix-tree.h>
 
 /* We need to stringify expanded macros so that they can be parsed */
 
@@ -51,9 +50,3 @@ LX_VALUE(MNT_NOEXEC)
 LX_VALUE(MNT_NOATIME)
 LX_VALUE(MNT_NODIRATIME)
 LX_VALUE(MNT_RELATIME)
-
-/* linux/radix-tree.h */
-LX_VALUE(RADIX_TREE_INDIRECT_PTR)
-LX_GDBPARSED(RADIX_TREE_HEIGHT_MASK)
-LX_GDBPARSED(RADIX_TREE_MAP_SHIFT)
-LX_GDBPARSED(RADIX_TREE_MAP_MASK)
diff --git a/scripts/gdb/linux/radixtree.py b/scripts/gdb/linux/radixtree.py
deleted file mode 100644 (file)
index 0fdef4e..0000000
+++ /dev/null
@@ -1,97 +0,0 @@
-#
-# gdb helper commands and functions for Linux kernel debugging
-#
-#  Radix Tree Parser
-#
-# Copyright (c) 2016 Linaro Ltd
-#
-# Authors:
-#  Kieran Bingham <kieran.bingham@linaro.org>
-#
-# This work is licensed under the terms of the GNU GPL version 2.
-#
-
-import gdb
-
-from linux import utils
-from linux import constants
-
-radix_tree_root_type = utils.CachedType("struct radix_tree_root")
-radix_tree_node_type = utils.CachedType("struct radix_tree_node")
-
-
-def is_indirect_ptr(node):
-    long_type = utils.get_long_type()
-    return (node.cast(long_type) & constants.LX_RADIX_TREE_INDIRECT_PTR)
-
-
-def indirect_to_ptr(node):
-    long_type = utils.get_long_type()
-    node_type = node.type
-    indirect_ptr = node.cast(long_type) & ~constants.LX_RADIX_TREE_INDIRECT_PTR
-    return indirect_ptr.cast(node_type)
-
-
-def maxindex(height):
-    height = height & constants.LX_RADIX_TREE_HEIGHT_MASK
-    return gdb.parse_and_eval("height_to_maxindex["+str(height)+"]")
-
-
-def lookup(root, index):
-    if root.type == radix_tree_root_type.get_type().pointer():
-        root = root.dereference()
-    elif root.type != radix_tree_root_type.get_type():
-        raise gdb.GdbError("Must be struct radix_tree_root not {}"
-                           .format(root.type))
-
-    node = root['rnode']
-    if node is 0:
-        return None
-
-    if not (is_indirect_ptr(node)):
-        if (index > 0):
-            return None
-        return node
-
-    node = indirect_to_ptr(node)
-
-    height = node['path'] & constants.LX_RADIX_TREE_HEIGHT_MASK
-    if (index > maxindex(height)):
-        return None
-
-    shift = (height-1) * constants.LX_RADIX_TREE_MAP_SHIFT
-
-    while True:
-        new_index = (index >> shift) & constants.LX_RADIX_TREE_MAP_MASK
-        slot = node['slots'][new_index]
-
-        node = slot.cast(node.type.pointer()).dereference()
-        if node is 0:
-            return None
-
-        shift -= constants.LX_RADIX_TREE_MAP_SHIFT
-        height -= 1
-
-        if (height <= 0):
-            break
-
-    return node
-
-
-class LxRadixTree(gdb.Function):
-    """ Lookup and return a node from a RadixTree.
-
-$lx_radix_tree_lookup(root_node [, index]): Return the node at the given index.
-If index is omitted, the root node is dereferenced and returned."""
-
-    def __init__(self):
-        super(LxRadixTree, self).__init__("lx_radix_tree_lookup")
-
-    def invoke(self, root, index=0):
-        result = lookup(root, index)
-        if result is None:
-            raise gdb.GdbError("No entry in tree at index {}".format(index))
-
-        return result
-
-LxRadixTree()
index 9a0f892..004b0ac 100644 (file)
@@ -153,7 +153,7 @@ lx-symbols command."""
             saved_state['breakpoint'].enabled = saved_state['enabled']
 
     def invoke(self, arg, from_tty):
-        self.module_paths = arg.split()
+        self.module_paths = [os.path.expanduser(p) for p in arg.split()]
         self.module_paths.append(os.getcwd())
 
         # enforce update
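The lx-symbols change expands a leading `~` with `os.path.expanduser`, because gdb hands the argument string over without shell expansion. The same hand-expansion can be sketched in bash, where a quoted tilde is likewise left literal (a sketch, not part of the patch):

```shell
p="~/modules"               # quoting suppressed tilde expansion
expanded="${p/#\~/$HOME}"   # expand a leading ~ by hand
echo "$expanded"            # e.g. /home/user/modules
```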
index 3a80ad6..6e0b0af 100644 (file)
@@ -31,4 +31,3 @@ else:
     import linux.lists
     import linux.proc
     import linux.constants
-    import linux.radixtree
index 2660fbc..7798e16 100644 (file)
@@ -500,34 +500,34 @@ static int apparmor_setprocattr(struct task_struct *task, char *name,
 {
        struct common_audit_data sa;
        struct apparmor_audit_data aad = {0,};
-       char *command, *args = value;
+       char *command, *largs = NULL, *args = value;
        size_t arg_size;
        int error;
 
        if (size == 0)
                return -EINVAL;
-       /* args points to a PAGE_SIZE buffer, AppArmor requires that
-        * the buffer must be null terminated or have size <= PAGE_SIZE -1
-        * so that AppArmor can null terminate them
-        */
-       if (args[size - 1] != '\0') {
-               if (size == PAGE_SIZE)
-                       return -EINVAL;
-               args[size] = '\0';
-       }
-
        /* task can only write its own attributes */
        if (current != task)
                return -EACCES;
 
-       args = value;
+       /* AppArmor requires that the buffer must be null terminated atm */
+       if (args[size - 1] != '\0') {
+               /* null terminate */
+               largs = args = kmalloc(size + 1, GFP_KERNEL);
+               if (!args)
+                       return -ENOMEM;
+               memcpy(args, value, size);
+               args[size] = '\0';
+       }
+
+       error = -EINVAL;
        args = strim(args);
        command = strsep(&args, " ");
        if (!args)
-               return -EINVAL;
+               goto out;
        args = skip_spaces(args);
        if (!*args)
-               return -EINVAL;
+               goto out;
 
        arg_size = size - (args - (char *) value);
        if (strcmp(name, "current") == 0) {
@@ -553,10 +553,12 @@ static int apparmor_setprocattr(struct task_struct *task, char *name,
                        goto fail;
        } else
                /* only support the "current" and "exec" process attributes */
-               return -EINVAL;
+               goto fail;
 
        if (!error)
                error = size;
+out:
+       kfree(largs);
        return error;
 
 fail:
@@ -565,9 +567,9 @@ fail:
        aad.profile = aa_current_profile();
        aad.op = OP_SETPROCATTR;
        aad.info = name;
-       aad.error = -EINVAL;
+       aad.error = error = -EINVAL;
        aa_audit_msg(AUDIT_APPARMOR_DENIED, &sa, NULL);
-       return -EINVAL;
+       goto out;
 }
 
 static int apparmor_task_setrlimit(struct task_struct *task,
index a85d455..b4fe9b0 100644 (file)
@@ -160,6 +160,8 @@ void snd_ctl_notify(struct snd_card *card, unsigned int mask,
        
        if (snd_BUG_ON(!card || !id))
                return;
+       if (card->shutdown)
+               return;
        read_lock(&card->ctl_files_rwlock);
 #if IS_ENABLED(CONFIG_SND_MIXER_OSS)
        card->mixer_oss_change_count++;
index 308c9ec..8e980aa 100644 (file)
@@ -849,6 +849,14 @@ int snd_pcm_new_internal(struct snd_card *card, const char *id, int device,
 }
 EXPORT_SYMBOL(snd_pcm_new_internal);
 
+static void free_chmap(struct snd_pcm_str *pstr)
+{
+       if (pstr->chmap_kctl) {
+               snd_ctl_remove(pstr->pcm->card, pstr->chmap_kctl);
+               pstr->chmap_kctl = NULL;
+       }
+}
+
 static void snd_pcm_free_stream(struct snd_pcm_str * pstr)
 {
        struct snd_pcm_substream *substream, *substream_next;
@@ -871,6 +879,7 @@ static void snd_pcm_free_stream(struct snd_pcm_str * pstr)
                kfree(setup);
        }
 #endif
+       free_chmap(pstr);
        if (pstr->substream_count)
                put_device(&pstr->dev);
 }
@@ -1135,10 +1144,7 @@ static int snd_pcm_dev_disconnect(struct snd_device *device)
        for (cidx = 0; cidx < 2; cidx++) {
                if (!pcm->internal)
                        snd_unregister_device(&pcm->streams[cidx].dev);
-               if (pcm->streams[cidx].chmap_kctl) {
-                       snd_ctl_remove(pcm->card, pcm->streams[cidx].chmap_kctl);
-                       pcm->streams[cidx].chmap_kctl = NULL;
-               }
+               free_chmap(&pcm->streams[cidx]);
        }
        mutex_unlock(&pcm->open_mutex);
        mutex_unlock(&register_mutex);
index e320c44..6f8ea13 100644 (file)
@@ -1218,8 +1218,10 @@ static int azx_free(struct azx *chip)
        if (use_vga_switcheroo(hda)) {
                if (chip->disabled && hda->probe_continued)
                        snd_hda_unlock_devices(&chip->bus);
-               if (hda->vga_switcheroo_registered)
+               if (hda->vga_switcheroo_registered) {
                        vga_switcheroo_unregister_client(chip->pci);
+                       vga_switcheroo_fini_domain_pm_ops(chip->card->dev);
+               }
        }
 
        if (bus->chip_init) {
@@ -2267,6 +2269,8 @@ static const struct pci_device_id azx_ids[] = {
          .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
        { PCI_DEVICE(0x1002, 0x157a),
          .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+       { PCI_DEVICE(0x1002, 0x15b3),
+         .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
        { PCI_DEVICE(0x1002, 0x793b),
          .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
        { PCI_DEVICE(0x1002, 0x7919),
index 5fac786..abcb5a6 100644 (file)
@@ -5738,7 +5738,6 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
        {}
 };
 #define ALC225_STANDARD_PINS \
-       {0x12, 0xb7a60130}, \
        {0x21, 0x04211020}
 
 #define ALC256_STANDARD_PINS \
@@ -5763,10 +5762,24 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
 static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
        SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
                ALC225_STANDARD_PINS,
+               {0x12, 0xb7a60130},
                {0x14, 0x901701a0}),
        SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
                ALC225_STANDARD_PINS,
+               {0x12, 0xb7a60130},
                {0x14, 0x901701b0}),
+       SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+               ALC225_STANDARD_PINS,
+               {0x12, 0xb7a60150},
+               {0x14, 0x901701a0}),
+       SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+               ALC225_STANDARD_PINS,
+               {0x12, 0xb7a60150},
+               {0x14, 0x901701b0}),
+       SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+               ALC225_STANDARD_PINS,
+               {0x12, 0xb7a60130},
+               {0x1b, 0x90170110}),
        SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
                {0x14, 0x90170110},
                {0x21, 0x02211020}),
index 69860da..9e5276d 100644 (file)
@@ -556,7 +556,6 @@ static int usb_audio_probe(struct usb_interface *intf,
                                goto __error;
                        }
                        chip = usb_chip[i];
-                       dev_set_drvdata(&dev->dev, chip);
                        atomic_inc(&chip->active); /* avoid autopm */
                        break;
                }
@@ -582,6 +581,7 @@ static int usb_audio_probe(struct usb_interface *intf,
                        goto __error;
                }
        }
+       dev_set_drvdata(&dev->dev, chip);
 
        /*
         * For devices with more than one control interface, we assume the
diff --git a/tools/hv/bondvf.sh b/tools/hv/bondvf.sh
new file mode 100755 (executable)
index 0000000..8e96023
--- /dev/null
@@ -0,0 +1,193 @@
+#!/bin/bash
+
+# This example script creates bonding network devices based on synthetic NIC
+# (the virtual network adapter usually provided by Hyper-V) and the matching
+# VF NIC (SRIOV virtual function), so that the synthetic NIC and VF NIC
+# function as one network device, failing over to the synthetic NIC if the
+# VF is down.
+#
+# Usage:
+# - After configuring the vSwitch and vNIC with SRIOV, start the Linux
+#   virtual machine (VM)
+# - Run this script on the VM. It will create configuration files in a
+#   distro-specific directory.
+# - Reboot the VM, so that the bonding config is enabled.
+#
+# The config files are DHCP by default. You may edit them if you need to change
+# to Static IP or change other settings.
+#
+
+sysdir=/sys/class/net
+netvsc_cls={f8615163-df3e-46c5-913f-f2d2f965ed0e}
+bondcnt=0
+
+# Detect Distro
+if [ -f /etc/redhat-release ];
+then
+       cfgdir=/etc/sysconfig/network-scripts
+       distro=redhat
+elif grep -q 'Ubuntu' /etc/issue
+then
+       cfgdir=/etc/network
+       distro=ubuntu
+elif grep -q 'SUSE' /etc/issue
+then
+       cfgdir=/etc/sysconfig/network
+       distro=suse
+else
+       echo "Unsupported Distro"
+       exit 1
+fi
+
+echo Detected Distro: $distro, or compatible
+
+# Get a list of ethernet names
+list_eth=(`cd $sysdir && ls -d */ | cut -d/ -f1 | grep -v bond`)
+eth_cnt=${#list_eth[@]}
+
+echo List of net devices:
+
+# Get the MAC addresses
+for (( i=0; i < $eth_cnt; i++ ))
+do
+       list_mac[$i]=`cat $sysdir/${list_eth[$i]}/address`
+       echo ${list_eth[$i]}, ${list_mac[$i]}
+done
+
+# Find NIC with matching MAC
+for (( i=0; i < $eth_cnt-1; i++ ))
+do
+       for (( j=i+1; j < $eth_cnt; j++ ))
+       do
+               if [ "${list_mac[$i]}" = "${list_mac[$j]}" ]
+               then
+                       list_match[$i]=${list_eth[$j]}
+                       break
+               fi
+       done
+done
+
+function create_eth_cfg_redhat {
+       local fn=$cfgdir/ifcfg-$1
+
+       rm -f $fn
+       echo DEVICE=$1 >>$fn
+       echo TYPE=Ethernet >>$fn
+       echo BOOTPROTO=none >>$fn
+       echo ONBOOT=yes >>$fn
+       echo NM_CONTROLLED=no >>$fn
+       echo PEERDNS=yes >>$fn
+       echo IPV6INIT=yes >>$fn
+       echo MASTER=$2 >>$fn
+       echo SLAVE=yes >>$fn
+}
+
+function create_eth_cfg_pri_redhat {
+       create_eth_cfg_redhat $1 $2
+}
+
+function create_bond_cfg_redhat {
+       local fn=$cfgdir/ifcfg-$1
+
+       rm -f $fn
+       echo DEVICE=$1 >>$fn
+       echo TYPE=Bond >>$fn
+       echo BOOTPROTO=dhcp >>$fn
+       echo ONBOOT=yes >>$fn
+       echo NM_CONTROLLED=no >>$fn
+       echo PEERDNS=yes >>$fn
+       echo IPV6INIT=yes >>$fn
+       echo BONDING_MASTER=yes >>$fn
+       echo BONDING_OPTS=\"mode=active-backup miimon=100 primary=$2\" >>$fn
+}
+
+function create_eth_cfg_ubuntu {
+       local fn=$cfgdir/interfaces
+
+       echo $'\n'auto $1 >>$fn
+       echo iface $1 inet manual >>$fn
+       echo bond-master $2 >>$fn
+}
+
+function create_eth_cfg_pri_ubuntu {
+       local fn=$cfgdir/interfaces
+
+       create_eth_cfg_ubuntu $1 $2
+       echo bond-primary $1 >>$fn
+}
+
+function create_bond_cfg_ubuntu {
+       local fn=$cfgdir/interfaces
+
+       echo $'\n'auto $1 >>$fn
+       echo iface $1 inet dhcp >>$fn
+       echo bond-mode active-backup >>$fn
+       echo bond-miimon 100 >>$fn
+       echo bond-slaves none >>$fn
+}
+
+function create_eth_cfg_suse {
+        local fn=$cfgdir/ifcfg-$1
+
+        rm -f $fn
+       echo BOOTPROTO=none >>$fn
+       echo STARTMODE=auto >>$fn
+}
+
+function create_eth_cfg_pri_suse {
+       create_eth_cfg_suse $1
+}
+
+function create_bond_cfg_suse {
+       local fn=$cfgdir/ifcfg-$1
+
+       rm -f $fn
+       echo BOOTPROTO=dhcp >>$fn
+       echo STARTMODE=auto >>$fn
+       echo BONDING_MASTER=yes >>$fn
+       echo BONDING_SLAVE_0=$2 >>$fn
+       echo BONDING_SLAVE_1=$3 >>$fn
+       echo BONDING_MODULE_OPTS=\'mode=active-backup miimon=100 primary=$2\' >>$fn
+}
+
+function create_bond {
+       local bondname=bond$bondcnt
+       local primary
+       local secondary
+
+       local class_id1=`cat $sysdir/$1/device/class_id 2>/dev/null`
+       local class_id2=`cat $sysdir/$2/device/class_id 2>/dev/null`
+
+       if [ "$class_id1" = "$netvsc_cls" ]
+       then
+               primary=$2
+               secondary=$1
+       elif [ "$class_id2" = "$netvsc_cls" ]
+       then
+               primary=$1
+               secondary=$2
+       else
+               return 0
+       fi
+
+       echo $'\nBond name:' $bondname
+
+       echo configuring $primary
+       create_eth_cfg_pri_$distro $primary $bondname
+
+       echo configuring $secondary
+       create_eth_cfg_$distro $secondary $bondname
+
+       echo creating: $bondname with primary slave: $primary
+       create_bond_cfg_$distro $bondname $primary $secondary
+
+       let bondcnt=bondcnt+1
+}
+
+for (( i=0; i < $eth_cnt-1; i++ ))
+do
+        if [ -n "${list_match[$i]}" ]
+        then
+               create_bond ${list_eth[$i]} ${list_match[$i]}
+        fi
+done
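The pairing pass in the script works because Hyper-V assigns the VF the same MAC address as its synthetic counterpart, so duplicate MACs identify bond candidates. A self-contained sketch of that pass, with made-up device names and addresses:

```shell
# Hypothetical device list: eth0 (synthetic) and eth2 (VF) share a MAC.
list_eth=(eth0 eth1 eth2)
list_mac=(00:15:5d:aa:bb:01 00:15:5d:aa:bb:02 00:15:5d:aa:bb:01)
eth_cnt=${#list_eth[@]}

# Same O(n^2) duplicate-MAC search as bondvf.sh.
for (( i=0; i < $eth_cnt-1; i++ )); do
    for (( j=i+1; j < $eth_cnt; j++ )); do
        if [ "${list_mac[$i]}" = "${list_mac[$j]}" ]; then
            list_match[$i]=${list_eth[$j]}
            break
        fi
    done
done
echo "${list_match[0]}"   # -> eth2 (pairs with eth0)
```

`create_bond` then inspects `class_id` to decide which member of the pair is the synthetic NIC and makes the VF the primary slave.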
index e8a1e69..25d8031 100644 (file)
@@ -122,10 +122,14 @@ static bool ignore_func(struct objtool_file *file, struct symbol *func)
 
        /* check for STACK_FRAME_NON_STANDARD */
        if (file->whitelist && file->whitelist->rela)
-               list_for_each_entry(rela, &file->whitelist->rela->rela_list, list)
-                       if (rela->sym->sec == func->sec &&
+               list_for_each_entry(rela, &file->whitelist->rela->rela_list, list) {
+                       if (rela->sym->type == STT_SECTION &&
+                           rela->sym->sec == func->sec &&
                            rela->addend == func->offset)
                                return true;
+                       if (rela->sym->type == STT_FUNC && rela->sym == func)
+                               return true;
+               }
 
        /* check if it has a context switching instruction */
        func_for_each_insn(file, func, insn)
index 4d21ef2..4c6f09a 100644 (file)
@@ -252,9 +252,10 @@ def irq__irq_handler_exit(name, context, cpu, sec, nsec, pid, comm, callchain, i
        event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm, irq, ret)
        all_event_list.append(event_info)
 
-def napi__napi_poll(name, context, cpu, sec, nsec, pid, comm, callchain, napi, dev_name):
+def napi__napi_poll(name, context, cpu, sec, nsec, pid, comm, callchain, napi,
+                    dev_name, work=None, budget=None):
        event_info = (name, context, cpu, nsecs(sec, nsec), pid, comm,
-                       napi, dev_name)
+                       napi, dev_name, work, budget)
        all_event_list.append(event_info)
 
 def net__netif_receive_skb(name, context, cpu, sec, nsec, pid, comm, callchain, skbaddr,
@@ -354,11 +355,13 @@ def handle_irq_softirq_exit(event_info):
        receive_hunk_list.append(rec_data)
 
 def handle_napi_poll(event_info):
-       (name, context, cpu, time, pid, comm, napi, dev_name) = event_info
+       (name, context, cpu, time, pid, comm, napi, dev_name,
+               work, budget) = event_info
        if cpu in net_rx_dic.keys():
                event_list = net_rx_dic[cpu]['event_list']
                rec_data = {'event_name':'napi_poll',
-                               'dev':dev_name, 'event_t':time}
+                               'dev':dev_name, 'event_t':time,
+                               'work':work, 'budget':budget}
                event_list.append(rec_data)
 
 def handle_netif_rx(event_info):
index b7447ce..b0ac057 100644 (file)
@@ -122,7 +122,7 @@ enum {
        NODE_TAGGED = 2,
 };
 
-#define THRASH_SIZE            1000 * 1000
+#define THRASH_SIZE            (1000 * 1000)
 #define N 127
 #define BATCH  33
 
index 7cf6e17..b9d34b3 100644 (file)
@@ -510,10 +510,11 @@ static void slab_stats(struct slabinfo *s)
                        s->alloc_node_mismatch, (s->alloc_node_mismatch * 100) / total);
        }
 
-       if (s->cmpxchg_double_fail || s->cmpxchg_double_cpu_fail)
+       if (s->cmpxchg_double_fail || s->cmpxchg_double_cpu_fail) {
                printf("\nCmpxchg_double Looping\n------------------------\n");
                printf("Locked Cmpxchg Double redos   %lu\nUnlocked Cmpxchg Double redos %lu\n",
                        s->cmpxchg_double_fail, s->cmpxchg_double_cpu_fail);
+       }
 }
 
 static void report(struct slabinfo *s)