commit dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
Author: Greg Kroah-Hartman
Date:   Thu Feb 20 10:46:04 2014 -0800

    Linux 3.4.81

commit 478b97d4d7c5468e2ea21ad34d344a4dac5ff230
Author: Jeff Layton
Date:   Thu Aug 2 14:30:56 2012 -0400

    nfs: tear down caches in nfs_init_writepagecache when allocation fails

    commit 3dd4765fce04c0b4af1e0bc4c0b10f906f95fabc upstream.

    ...and ensure that we tear down the nfs_commit_data cache too when
    unloading the module.

    Cc: Bryan Schumaker
    Signed-off-by: Jeff Layton
    Signed-off-by: Trond Myklebust
    [bwh: Backported to 3.2: drop the nfs_cdata_cachep cleanup; it doesn't exist]
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 26fead641f8e2a5052aa3cfc88caf876f0e84941
Author: Dan Rosenberg
Date:   Mon Jul 30 14:40:26 2012 -0700

    lib/vsprintf.c: kptr_restrict: fix pK-error in SysRq show-all-timers(Q)

    commit 3715c5309f6d175c3053672b73fd4f73be16fd07 upstream.

    When using ALT+SysRq+Q all the pointers are replaced with "pK-error"
    like this:

        [23153.208033]   .base: pK-error

    With echo h > /proc/sysrq-trigger it works:

        [23107.776363]   .base: ffff88023e60d540

    The intent behind this behavior was to return "pK-error" in cases where
    the %pK format specifier was used in interrupt context, because the
    CAP_SYSLOG check wouldn't be meaningful.  Clearly this should only apply
    when kptr_restrict is actually enabled though.

    Reported-by: Stevie Trujillo
    Signed-off-by: Dan Rosenberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 374d3a4168761631fe309e2bda8dc3bfb4eadcb6
Author: Asias He
Date:   Fri May 25 16:03:27 2012 +0800

    virtio-blk: Use block layer provided spinlock

    commit 2c95a3290919541b846bee3e0fbaa75860929f53 upstream.

    The block layer will allocate a spinlock for the queue if the driver
    does not provide one in blk_init_queue().

    The reason to use the internal spinlock is that blk_cleanup_queue()
    will switch to the internal spinlock in the cleanup code path:

        if (q->queue_lock != &q->__queue_lock)
                q->queue_lock = &q->__queue_lock;

    However, processes which are in D state might have taken the driver
    provided spinlock; when those processes wake up, they would release the
    block layer provided spinlock:

    =====================================
    [ BUG: bad unlock balance detected! ]
    3.4.0-rc7+ #238 Not tainted
    -------------------------------------
    fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
    [] blk_queue_bio+0x2a2/0x380
    but there are no more locks to release!

    other info that might help us debug this:
    1 lock held by fio/3587:
     #0: (&(&vblk->lock)->rlock){......}, at: [] get_request_wait+0x19a/0x250

    Other drivers use the block layer provided spinlock as well, e.g. SCSI.

    Switching to the block layer provided spinlock saves a bit of memory
    and does not increase lock contention.  Performance testing shows no
    real difference before and after this patch.

    Changes in v2: Improve commit log as Michael suggested.

    Cc: virtualization@lists.linux-foundation.org
    Cc: kvm@vger.kernel.org
    Signed-off-by: Asias He
    Acked-by: Michael S. Tsirkin
    Signed-off-by: Rusty Russell
    [bwh: Backported to 3.2: adjust context]
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman
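For illustration, a minimal sketch of the pattern this change adopts: passing
NULL to blk_init_queue() so the block layer supplies and manages the queue
lock itself. The request function and its body are placeholders, not the
actual virtio_blk code:

    #include <linux/blkdev.h>

    static void my_request_fn(struct request_queue *q)
    {
            struct request *req;

            /* q->queue_lock is held on entry, as for any request_fn. */
            while ((req = blk_fetch_request(q)) != NULL) {
                    /* hand the request to the hardware here ... */
                    __blk_end_request_all(req, 0);
            }
    }

    static struct request_queue *my_setup_queue(void)
    {
            /*
             * Passing NULL instead of a driver-owned spinlock makes the
             * block layer fall back to its internal q->__queue_lock, so
             * blk_cleanup_queue() never has to switch locks underneath
             * sleeping processes.
             */
            return blk_init_queue(my_request_fn, NULL);
    }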
commit ba37c708e4bb32505bc1c88a5a6ac1a434cfaee0
Author: Seth Forshee
Date:   Tue Jul 24 23:54:11 2012 -0700

    Input: synaptics - handle out of bounds values from the hardware

    commit c0394506e69b37c47d391c2a7bbea3ea236d8ec8 upstream.

    The touchpad on the Acer Aspire One D250 will report out of range
    values in the extreme lower portion of the touchpad.  These appear as
    abrupt changes in the values reported by the hardware from very low
    values to very high values, which can cause unexpected vertical jumps
    in the position of the mouse pointer.

    What seems to be happening is that the value is wrapping to a two's
    complement negative value of higher resolution than the 13-bit value
    reported by the hardware, with the high-order bits being truncated.
    This patch adds handling for these values by converting them to the
    appropriate negative values.

    The only tricky part about this is deciding when to treat a number as
    negative.  It stands to reason that if out of range values can be
    reported on the low end then it could also happen on the high end, so
    not all out of range values should be treated as negative.  The
    approach taken here is to split the difference between the maximum
    legitimate value for the axis and the maximum possible value that the
    hardware can report, treating values greater than this number as
    negative and all other values as positive.  This can be tweaked later
    if hardware is found that operates outside of these parameters.

    BugLink: http://bugs.launchpad.net/bugs/1001251
    Signed-off-by: Seth Forshee
    Reviewed-by: Daniel Kurtz
    Signed-off-by: Dmitry Torokhov
    [bwh: Backported to 3.2: adjust context]
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman
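For illustration, a minimal sketch of the split-the-difference idea described
above. The constants and helper name are illustrative (the axis maximum in
particular is board-specific), not the driver's actual identifiers:

    /*
     * The hardware reports 13-bit absolute coordinates.  A wrapped
     * (negative) value arrives as a large positive number, so treat
     * anything above the midpoint between the largest legitimate
     * coordinate and the largest encodable value as a two's complement
     * negative whose high-order bits were truncated.
     */
    #define ABS_POS_BITS    13
    #define MAX_HW_VALUE    ((1 << ABS_POS_BITS) - 1)   /* 8191 */
    #define MAX_LEGIT_Y     5888                        /* example axis max */
    #define Y_NEG_THRESHOLD ((MAX_LEGIT_Y + MAX_HW_VALUE + 1) / 2)

    static int synaptics_wrap_y(int y)
    {
            if (y > Y_NEG_THRESHOLD)
                    y -= 1 << ABS_POS_BITS;  /* restore the negative value */
            return y;
    }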
commit b249f99c023402a192e1af812326c5092e64175f
Author: Bojan Smojver
Date:   Sun Apr 29 22:42:06 2012 +0200

    PM / Hibernate: Hibernate/thaw fixes/improvements

    commit 5a21d489fd9541a4a66b9a500659abaca1b19a51 upstream.

    1. Do not allocate memory for buffers from emergency pools, unless
       absolutely required.  Do not warn about and do not retry
       non-essential failed allocations.

    2. Do not check the amount of free pages left on every single page
       write, but wait until one map is completely populated and then
       check.

    3. Set maximum number of pages for read buffering consistently,
       instead of inadvertently depending on the size of the sector type.

    4. Fix copyright line, which I missed when I submitted the hibernation
       threading patch.

    5. Dispense with bit shifting arithmetic to improve readability.

    6. Really recalculate the number of pages required to be free after
       all allocations have been done.

    7. Fix calculation of pages required for read buffering.  Only count
       in pages that do not belong to high memory.

    Signed-off-by: Bojan Smojver
    Signed-off-by: Rafael J. Wysocki
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit ec90b6113735a9f9fcb64a604b28f446fe03ded2
Author: Avi Kivity
Date:   Sun Apr 22 17:02:11 2012 +0300

    KVM: Fix buffer overflow in kvm_set_irq()

    commit f2ebd422f71cda9c791f76f85d2ca102ae34a1ed upstream.

    kvm_set_irq() has an internal buffer of three irq routing entries,
    allowing connecting a GSI to three IRQ chips or one MSI.  However,
    setup_routing_entry() does not properly enforce this, allowing three
    irqchip routes followed by an MSI route to overflow the buffer.

    Fix by ensuring that an MSI entry is added to an empty list.

    Signed-off-by: Avi Kivity
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 230a0c3b5881b5493c06d382d799305b4ee603c5
Author: Nicholas Bellinger
Date:   Sat Sep 29 17:15:37 2012 -0700

    target/file: Re-enable optional fd_buffered_io=1 operation

    commit b32f4c7ed85c5cee2a21a55c9f59ebc9d57a2463 upstream.

    This patch re-adds the ability to optionally run in buffered FILEIO
    mode (eg: w/o O_DSYNC) for device backends in order to once again use
    the Linux buffered cache as a write-back storage mechanism.

    This logic was originally dropped with mainline v3.5-rc commit:

        commit a4dff3043c231d57f982af635c9d2192ee40e5ae
        Author: Nicholas Bellinger
        Date:   Wed May 30 16:25:41 2012 -0700

            target/file: Use O_DSYNC by default for FILEIO backends

    The difference with this patch is that fd_create_virtdevice() now
    forces the explicit setting of emulate_write_cache=1 when buffered
    FILEIO operation has been enabled.

    (v2: Switch to FDBD_HAS_BUFFERED_IO_WCE + add more detailed comment as
    requested by hch)

    Reported-by: Ferry
    Cc: Christoph Hellwig
    Signed-off-by: Nicholas Bellinger
    Signed-off-by: Ben Hutchings
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 45a0374fd5ee6c27349dc41e31db0bd0c87663ca
Author: Nicholas Bellinger
Date:   Wed May 30 16:25:41 2012 -0700

    target/file: Use O_DSYNC by default for FILEIO backends

    commit a4dff3043c231d57f982af635c9d2192ee40e5ae upstream.

    Convert to use O_DSYNC for all cases at FILEIO backend creation time to
    avoid the extra syncing of pure timestamp updates with legacy O_SYNC
    during default operation, as recommended by hch.  Continue to do this
    independently of the Write Cache Enable (WCE) bit, as WCE=0 is
    currently the default for all backend devices and enabled by the user
    on a per device basis via attrib/emulate_write_cache.

    This patch drops the now unnecessary fd_buffered_io= token usage that
    was originally signalling when to explicitly disable O_SYNC at backend
    creation time for buffered I/O operation.  This can end up being
    dangerous for a number of reasons during physical node failure, so go
    ahead and drop this option for now when O_DSYNC is used as the default.

    Also allow explicit FUA WRITEs -> vfs_fsync_range() call to function in
    fd_execute_cmd() independently of the WCE bit setting.

    Reported-by: Christoph Hellwig
    Cc: Linus Torvalds
    Signed-off-by: Nicholas Bellinger
    Signed-off-by: Ben Hutchings
    [bwh: Backported to 3.2:
     - We have fd_do_task() and not fd_execute_cmd()
     - Various fields are in struct se_task rather than struct se_cmd
     - fd_create_virtdevice() flags initialisation hasn't been cleaned up]
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman
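For illustration, a minimal sketch of the open-flag choice these two patches
converge on: opening the backing file with O_DSYNC so file data (but not pure
timestamp updates) is synced on every write. The function name and mode are
placeholders, not the actual target code:

    #include <linux/fs.h>

    static struct file *open_backing_file(const char *path)
    {
            /*
             * O_DSYNC syncs the data and the metadata needed to retrieve
             * it on each write; unlike O_SYNC it does not force out pure
             * timestamp updates, which is what made legacy O_SYNC more
             * expensive here.
             */
            int flags = O_RDWR | O_CREAT | O_LARGEFILE | O_DSYNC;

            return filp_open(path, flags, 0600);
    }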
commit f7441069a08718df307a9022095d53e558dcf2da
Author: Jan Kara
Date:   Fri Oct 4 09:29:12 2013 -0400

    IB/qib: Convert qib_user_sdma_pin_pages() to use get_user_pages_fast()

    commit 603e7729920e42b3c2f4dbfab9eef4878cb6e8fa upstream.

    qib_user_sdma_queue_pkts() gets called with mmap_sem held for writing.
    Except for get_user_pages() deep down in qib_user_sdma_pin_pages() we
    don't seem to need mmap_sem at all.  Even more interestingly, the
    function qib_user_sdma_queue_pkts() (and also qib_user_sdma_coalesce(),
    called somewhat later) calls copy_from_user(), which can hit a page
    fault, and we deadlock on trying to get mmap_sem when handling that
    fault.

    So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
    leave mmap_sem locking to the mm code.  This deadlock has actually
    been observed in the wild when the node is under memory pressure.

    Reviewed-by: Mike Marciniszyn
    Signed-off-by: Jan Kara
    Signed-off-by: Roland Dreier
    [Backported to 3.4 (thanks to Ben Hutchings):
     - Adjust context
     - Adjust indentation and nr_pages argument in qib_user_sdma_pin_pages()]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Mike Marciniszyn
    Signed-off-by: Greg Kroah-Hartman
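For illustration, a sketch of the conversion pattern using the 3.4-era APIs;
the function names are placeholders, not the qib code itself:

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Before: the caller had to take mmap_sem itself, and any nested
     * page fault (e.g. from copy_from_user()) could deadlock on it. */
    static int pin_pages_slow(unsigned long addr, int npages,
                              struct page **pages)
    {
            int ret;

            down_read(&current->mm->mmap_sem);
            ret = get_user_pages(current, current->mm, addr, npages,
                                 0 /* write */, 0 /* force */, pages, NULL);
            up_read(&current->mm->mmap_sem);
            return ret;
    }

    /* After: get_user_pages_fast() does its own mm locking, so the
     * caller never holds mmap_sem across a possible page fault. */
    static int pin_pages_fast(unsigned long addr, int npages,
                              struct page **pages)
    {
            return get_user_pages_fast(addr, npages, 0 /* write */, pages);
    }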
commit f5a4c4b79e57f875b6788f6f8352ca246bfd8450
Author: Peter Zijlstra
Date:   Thu May 17 17:15:29 2012 +0200

    sched/nohz: Fix rq->cpu_load calculations some more

    commit 5aaa0b7a2ed5b12692c9ffb5222182bd558d3146 upstream.

    Follow up on commit 556061b00 ("sched/nohz: Fix rq->cpu_load[]
    calculations"): while that fixed the busy case, it regressed the mostly
    idle case.

    Add a callback from the nohz exit to also age the rq->cpu_load[]
    array.  This closes the hole where either there was no nohz load
    balance pass during the nohz, or there was a 'significant' amount of
    idle time between the last nohz balance and the nohz exit.

    So we'll update unconditionally from the tick to not insert any
    accidental 0 load periods while busy, and we try and catch up from
    nohz idle balance and nohz exit.  Both these are still prone to
    missing a jiffy, but that has always been the case.

    Signed-off-by: Peter Zijlstra
    Cc: pjt@google.com
    Cc: Venkatesh Pallipadi
    Link: http://lkml.kernel.org/n/tip-kt0trz0apodbf84ucjfdbr1a@git.kernel.org
    Signed-off-by: Ingo Molnar
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit e2d51f27e382be7b70a755f3ea2fbbeacdb50834
Author: Peter Zijlstra
Date:   Fri May 11 17:31:26 2012 +0200

    sched/nohz: Fix rq->cpu_load[] calculations

    commit 556061b00c9f2fd6a5524b6bde823ef12f299ecf upstream.

    While investigating why the load-balancer was misbehaving, I found
    that the rq->cpu_load[] tables were completely wrong; a bit more
    digging revealed that the updates that got through were missing ticks
    followed by a catchup of 2 ticks.

    The catchup assumes the cpu was idle during that time (since only nohz
    can cause missed ticks and the machine is idle etc..); this means that
    esp. the higher indices were significantly lower than they ought to
    be.

    The reason for this is that it's not correct to compare against
    jiffies on every jiffy on any other cpu than the cpu that updates
    jiffies.

    This patch kludges around it by only doing the catch-up stuff from
    nohz_idle_balance() and doing the regular stuff unconditionally from
    the tick.

    Signed-off-by: Peter Zijlstra
    Cc: pjt@google.com
    Cc: Venkatesh Pallipadi
    Link: http://lkml.kernel.org/n/tip-tp4kj18xdd5aj4vvj0qg55s2@git.kernel.org
    Signed-off-by: Ingo Molnar
    Cc: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 1c2bd0db1189643691557ff34406906b053cef92
Author: Steven Rostedt
Date:   Tue Feb 11 14:50:01 2014 -0500

    ftrace: Have function graph only trace based on global_ops filters

    commit 23a8e8441a0a74dd612edf81dc89d1600bc0a3d1 upstream.

    Doing some different tests, I discovered that function graph tracing,
    when filtered via the set_ftrace_filter and set_ftrace_notrace files,
    does not always honor those filters if another function ftrace_ops is
    registered to trace functions.

    The reason is that function graph just happens to trace all functions
    that the function tracer enables.  When there was only one user of
    function tracing, the function graph tracer did not need to worry
    about being called by functions that it did not want to trace.  But
    now that there are other users, this becomes a problem.

    For example, one just needs to do the following:

      # cd /sys/kernel/debug/tracing
      # echo schedule > set_ftrace_filter
      # echo function_graph > current_tracer
      # cat trace
    [..]
     0)               |  schedule() {
     ------------------------------------------
     0)   <idle>-0    =>  rcu_pre-7
     ------------------------------------------

     0) ! 2980.314 us |  }
     0)               |  schedule() {
     ------------------------------------------
     0)  rcu_pre-7    =>  <idle>-0
     ------------------------------------------

     0) + 20.701 us   |  }

      # echo 1 > /proc/sys/kernel/stack_tracer_enabled
      # cat trace
    [..]
     1) + 20.825 us   |        }
     1) + 21.651 us   |      }
     1) + 30.924 us   |    } /* SyS_ioctl */
     1)               |  do_page_fault() {
     1)               |    __do_page_fault() {
     1)   0.274 us    |      down_read_trylock();
     1)   0.098 us    |      find_vma();
     1)               |      handle_mm_fault() {
     1)               |        _raw_spin_lock() {
     1)   0.102 us    |          preempt_count_add();
     1)   0.097 us    |          do_raw_spin_lock();
     1)   2.173 us    |        }
     1)               |        do_wp_page() {
     1)   0.079 us    |          vm_normal_page();
     1)   0.086 us    |          reuse_swap_page();
     1)   0.076 us    |          page_move_anon_rmap();
     1)               |          unlock_page() {
     1)   0.082 us    |            page_waitqueue();
     1)   0.086 us    |            __wake_up_bit();
     1)   1.801 us    |          }
     1)   0.075 us    |          ptep_set_access_flags();
     1)               |          _raw_spin_unlock() {
     1)   0.098 us    |            do_raw_spin_unlock();
     1)   0.105 us    |            preempt_count_sub();
     1)   1.884 us    |          }
     1)   9.149 us    |        }
     1) + 13.083 us   |      }
     1)   0.146 us    |  up_read();

    When the stack tracer was enabled, it enabled all functions to be
    traced, which the function graph tracer then also traces.  This is a
    side effect that should not occur.

    To fix this, a test is added when the function tracing is changed, as
    well as when the graph tracer is enabled, to see if anything other
    than the ftrace global_ops function tracer is enabled.  If so, then
    the graph tracer calls a test trampoline that will look at the
    function that is being traced and compare it with the filters defined
    by the global_ops.

    As an optimization, if there are no other function tracers registered,
    or if the only registered function tracers also use the global_ops,
    the function graph infrastructure will call the registered function
    graph callback directly and not go through the test trampoline.

    Fixes: d2d45c7a03a2 "tracing: Have stack_tracer use a separate list of functions"
    Signed-off-by: Steven Rostedt
    Signed-off-by: Greg Kroah-Hartman

commit 2955866584c57545061dc38dc69de3461437aa9a
Author: Steven Rostedt
Date:   Tue Feb 11 14:49:37 2014 -0500

    ftrace: Fix synchronization location disabling and freeing ftrace_ops

    commit a4c35ed241129dd142be4cadb1e5a474a56d5464 upstream.

    The synchronization needed after ftrace_ops are unregistered must
    happen after the callback is disabled from being called by functions.

    The current location happens after the function is being removed from
    the internal lists, but not after the function callbacks were
    disabled, leaving the functions susceptible to being called after
    their callbacks are freed.

    This affects perf and any external users of function tracing (LTTng
    and SystemTap).

    Fixes: cdbe61bfe704 "ftrace: Allow dynamically allocated function tracers"
    Signed-off-by: Steven Rostedt
    Signed-off-by: Greg Kroah-Hartman
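For illustration, a sketch of the ordering rule this fix enforces: only free
data an ftrace callback uses after every CPU is guaranteed to have left the
callback. The ops and data names are placeholders; the backport's
ftrace_sync() (see the next commit) plays the role synchronize_sched() plays
here:

    #include <linux/ftrace.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    static struct ftrace_ops my_ops;    /* .func set at registration */
    static void *my_ops_data;

    static void my_tracer_exit(void)
    {
            /* Stop new invocations of the callback ... */
            unregister_ftrace_function(&my_ops);

            /*
             * ... but a CPU may still be inside it.  Wait for every CPU
             * to pass through a quiescent state before freeing anything
             * the callback dereferences; freeing earlier is exactly the
             * use-after-free window this commit closes.
             */
            synchronize_sched();

            kfree(my_ops_data);
    }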
commit 95bcd16ee7ce1cfb3fea853e38023f65d9d21c7c
Author: Steven Rostedt
Date:   Tue Feb 11 14:49:07 2014 -0500

    ftrace: Synchronize setting function_trace_op with ftrace_trace_function

    commit 405e1d834807e51b2ebd3dea81cb51e53fb61504 upstream.

    [ Partial commit backported to 3.4.  The ftrace_sync() code added by
      this is required for other fixes that 3.4 needs. ]

    ftrace_trace_function is a variable that holds what function will be
    called directly by the assembly code (mcount).  If just a single
    function is registered and it handles recursion itself, then the
    assembly will call that function directly without any helper function.
    It also passes in the ftrace_op that was registered with the callback.
    The ftrace_op to send is stored in the function_trace_op variable.

    The ftrace_trace_function and function_trace_op need to be coordinated
    such that the callback won't be called with the wrong ftrace_op;
    otherwise bad things can happen if it expected a different op.
    Luckily, there's no callback that doesn't use the helper functions
    that requires this.  But there soon will be, and this needs to be
    fixed.

    Use a set_function_trace_op variable to store the ftrace_op, and set
    function_trace_op to it when it is safe to do so (during the update
    function within the breakpoint or stop machine calls).  Or, if dynamic
    ftrace is not being used (static tracing), then we have to do a bit
    more synchronization when the ftrace_trace_function is set, as that
    takes effect immediately (as opposed to dynamic ftrace doing it with
    the modification of the trampoline).

    Signed-off-by: Steven Rostedt
    Signed-off-by: Greg Kroah-Hartman

commit a7333f3d237f3007d14a2ee0456b96a4b33522d0
Author: Mikulas Patocka
Date:   Mon Jan 13 19:37:54 2014 -0500

    dm sysfs: fix a module unload race

    commit 2995fa78e423d7193f3b57835f6c1c75006a0315 upstream.

    This reverts commit be35f48610 ("dm: wait until embedded kobject is
    released before destroying a device") and provides an improved fix.

    The kobject release code that calls the completion must be placed in a
    non-module file, otherwise there is a module unload race (if the
    process calling dm_kobject_release is preempted and the DM module
    unloaded after the completion is triggered, but before
    dm_kobject_release returns).

    To fix this race, this patch moves the completion code to dm-builtin.c
    which is always compiled directly into the kernel if BLK_DEV_DM is
    selected.

    The patch introduces a new dm_kobject_holder structure; its purpose is
    to keep the completion and kobject in one place, so that it can be
    accessed from non-module code without the need to export the layout of
    struct mapped_device to that code.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Greg Kroah-Hartman
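For illustration, a sketch of the holder arrangement the commit describes:
the kobject and its completion live in one structure, and the release
function that signals the completion is compiled into the kernel proper.
Function names are illustrative:

    #include <linux/kobject.h>
    #include <linux/completion.h>

    struct dm_kobject_holder {
            struct kobject kobj;
            struct completion completion;
    };

    /*
     * Built-in (non-module) code: it cannot be unloaded while a waiter
     * sits between complete() and this function's return, which is the
     * race the module-resident version had.
     */
    static void dm_kobject_release(struct kobject *kobj)
    {
            struct dm_kobject_holder *holder =
                    container_of(kobj, struct dm_kobject_holder, kobj);

            complete(&holder->completion);
    }

    /* Module side: wait until the kobject is really gone. */
    static void dm_destroy_wait(struct dm_kobject_holder *holder)
    {
            kobject_put(&holder->kobj);
            wait_for_completion(&holder->completion);
    }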
commit 3fea8b0a9f978cce6d0685464d4e8acb9bbd1acc
Author: Xishi Qiu
Date:   Tue Jul 31 16:43:19 2012 -0700

    mm: setup pageblock_order before it's used by sparsemem

    commit ca57df79d4f64e1a4886606af4289d40636189c5 upstream.

    On architectures with CONFIG_HUGETLB_PAGE_SIZE_VARIABLE set, such as
    Itanium, pageblock_order is a variable with a default value of 0.
    It's set to the right value by set_pageblock_order() in function
    free_area_init_core().

    But pageblock_order may be used by sparse_init() before
    free_area_init_core() is called, along this path:

        sparse_init()
            -> sparse_early_usemaps_alloc_node()
                -> usemap_size()
                    -> SECTION_BLOCKFLAGS_BITS
                        -> ((1UL << (PFN_SECTION_SHIFT - pageblock_order)) *
                            NR_PAGEBLOCK_BITS)

    The uninitialized pageblock_order will cause memory wasting because
    usemap_size() returns a much bigger value than is really needed.  For
    example, on an Itanium platform:

        sparse_init()                 pageblock_order=0   usemap_size=24576
        free_area_init_core() before  pageblock_order=0   usemap_size=24576
        free_area_init_core() after   pageblock_order=12  usemap_size=8

    That means 24K of memory has been wasted for each section, so fix it
    by calling set_pageblock_order() from sparse_init().

    Signed-off-by: Xishi Qiu
    Signed-off-by: Jiang Liu
    Cc: Tony Luck
    Cc: Yinghai Lu
    Cc: KAMEZAWA Hiroyuki
    Cc: Benjamin Herrenschmidt
    Cc: KOSAKI Motohiro
    Cc: David Rientjes
    Cc: Keping Chen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    [lizf: Backported to 3.4: adjust context]
    Signed-off-by: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit 237597d8f73155bba781acf3f4f558f6ed08a8ee
Author: Andrew Morton
Date:   Tue May 29 15:06:31 2012 -0700

    mm/page_alloc.c: remove pageblock_default_order()

    commit 955c1cd7401565671b064e499115344ec8067dfd upstream.

    This has always been broken: one version of set_pageblock_order()
    takes an unsigned int and the other version takes no arguments.  This
    bug was hidden because one version of set_pageblock_order() was a
    macro which doesn't evaluate its argument.

    Simplify it all and remove pageblock_default_order() altogether.

    Reported-by: rajman mekaco
    Cc: Mel Gorman
    Cc: KAMEZAWA Hiroyuki
    Cc: Tejun Heo
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    [lizf: Backported to 3.4: adjust context]
    Signed-off-by: Li Zefan
    Signed-off-by: Greg Kroah-Hartman

commit dbdd2eb440291a1d71e54cfe0e6465402d81abc3
Author: Daniel Vetter
Date:   Sun Jul 1 17:09:42 2012 +0200

    drm/i915: kick any firmware framebuffers before claiming the gtt

    commit 9f846a16d213523fbe6daea17e20df6b8ac5a1e5 upstream.

    Especially vesafb likes to map everything as uc- (yikes), and if that
    mapping still hangs around while we try to map the gtt as wc, the
    kernel will downgrade our request to uc-, resulting in abysmal
    performance.

    Unfortunately we can't do this as early as radeon does (i.e. as the
    first thing we do when initializing the hw), because our fb/mmio space
    region moves around on a per-gen basis.  So I've had to move it below
    the gtt initialization, but that seems to work, too.  The important
    thing is that we do this before we set up the gtt wc mapping.

    Now an altogether different question is why people compile their
    kernels with vesafb enabled, but I guess making things just work isn't
    bad per se ...

    v2: - s/radeondrmfb/inteldrmfb/
        - fix up error handling

    v3: Kill #ifdef X86, this is Intel after all.  Noticed by Ben
        Widawsky.

    v4: Jani Nikula complained about the pointless bool primary
        initialization.

    v5: Don't oops if we can't allocate, noticed by Chris Wilson.

    v6: Resolve conflicts with agp rework and fix up whitespace.

    This is commit e188719a2891f01b3100d in drm-next.

    Backport to the 3.5 -fixes queue requested by Dave Airlie: due to grub
    using vesa on fedora, their initrd seems to load vesafb before loading
    the real kms driver.  So tons more people actually experience a
    dead-slow gpu.  Hence also the Cc: stable.

    Reported-and-tested-by: "Kilarski, Bernard R"
    Reviewed-by: Chris Wilson
    Signed-off-by: Daniel Vetter
    Signed-off-by: Dave Airlie
    [lizf: Backported to 3.4: adjust context]
    Signed-off-by: Li Zefan
    Signed-off-by: Greg Kroah-Hartman
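For illustration, a sketch of the eviction mechanism: building an aperture
describing the range the firmware framebuffer mapped and asking the fbdev
core to remove any conflicting driver. This mirrors the approach, not the
exact i915 function:

    #include <linux/fb.h>
    #include <linux/slab.h>

    static int kick_out_firmware_fb(resource_size_t base,
                                    resource_size_t size, bool primary)
    {
            struct apertures_struct *ap = alloc_apertures(1);

            if (!ap)
                    return -ENOMEM;

            /* The aperture a firmware fb (vesafb, efifb) may have mapped
             * uc-; evict its driver before we create our wc mapping. */
            ap->ranges[0].base = base;
            ap->ranges[0].size = size;

            remove_conflicting_framebuffers(ap, "inteldrmfb", primary);
            kfree(ap);
            return 0;
    }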
commit 4e0bc3f3cd9028d4a2bc2f21c6d371b2f671823b
Author: Tao Ma
Date:   Mon May 28 18:20:59 2012 -0400

    ext4: protect group inode free counting with group lock

    commit 6f2e9f0e7d795214b9cf5a47724a273b705fd113 upstream.

    Now when we set the group inode free count, we don't have a proper
    group lock, so multiple threads may decrease the inode free count at
    the same time.  And e2fsck will complain something like:

        Free inodes count wrong for group #1 (1, counted=0).
        Fix? no

        Free inodes count wrong for group #2 (3, counted=0).
        Fix? no

        Directories count wrong for group #2 (780, counted=779).
        Fix? no

        Free inodes count wrong for group #3 (2272, counted=2273).
        Fix? no

    So this patch protects it with the ext4_lock_group.

    btw, it was found by xfstests test case 269; the volume was mkfsed
    with the parameter
    "-O ^resize_inode,^uninit_bg,extent,meta_bg,flex_bg,ext_attr", and
    after this fix I have run it 100 times and the error in e2fsck doesn't
    show up again.

    Signed-off-by: Tao Ma
    Signed-off-by: "Theodore Ts'o"
    Signed-off-by: Benjamin LaHaise
    Signed-off-by: Greg Kroah-Hartman
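For illustration, a sketch of the locking pattern the patch applies, using
ext4's per-group lock around the free-inode counter update (these are
ext4-internal helpers; the surrounding function is invented):

    #include "ext4.h"    /* ext4-internal helpers */

    static void dec_free_inodes(struct super_block *sb, ext4_group_t group,
                                struct ext4_group_desc *gdp)
    {
            ext4_lock_group(sb, group);  /* serialise per-group counters */

            ext4_free_inodes_set(sb, gdp,
                                 ext4_free_inodes_count(sb, gdp) - 1);
            if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
                                           EXT4_FEATURE_RO_COMPAT_GDT_CSUM))
                    gdp->bg_checksum = ext4_group_desc_csum(EXT4_SB(sb),
                                                            group, gdp);

            ext4_unlock_group(sb, group);
    }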
commit 5e23efd0c1d6c67761f859c141ba67bac80b81e0
Author: Paul E. McKenney
Date:   Mon Oct 15 21:35:59 2012 -0700

    printk: Fix scheduling-while-atomic problem in console_cpu_notify()

    commit 85eae82a0855d49852b87deac8653e4ebc8b291f upstream.

    The console_cpu_notify() function runs with interrupts disabled in the
    CPU_DYING case.  It therefore cannot block, which it will, for
    example, when it calls console_lock().  Therefore, remove the
    CPU_DYING leg of the switch statement to avoid this problem.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Srivatsa S. Bhat
    Signed-off-by: Linus Torvalds
    Cc: Guillaume Morin
    Signed-off-by: Greg Kroah-Hartman

commit 36f0c45db55e2e840deefc286a33c2c7aef2f18e
Author: Peter Oberparleiter
Date:   Thu Feb 6 15:58:20 2014 +0100

    x86, hweight: Fix BUG when booting with CONFIG_GCOV_PROFILE_ALL=y

    commit 6583327c4dd55acbbf2a6f25e775b28b3abf9a42 upstream.

    Commit d61931d89b ("x86: Add optimized popcnt variants") introduced
    the compile flag -fcall-saved-rdi for lib/hweight.c.  When combined
    with the options -fprofile-arcs and -O2, this flag causes gcc to
    generate broken constructor code.  As a result, a 64-bit x86 kernel
    compiled with CONFIG_GCOV_PROFILE_ALL=y prints the message "gcov:
    could not create file" and runs into sporadic BUGs during boot.

    The gcc people indicate that these kinds of problems are endemic when
    using ad hoc calling conventions.  It is therefore best to treat any
    file compiled with ad hoc calling conventions as an isolated
    environment and avoid things like profiling or coverage analysis,
    since those subsystems assume "normal" calling conventions.

    This patch avoids the bug by excluding lib/hweight.o from coverage
    profiling.

    Reported-by: Meelis Roos
    Cc: Andrew Morton
    Signed-off-by: Peter Oberparleiter
    Link: http://lkml.kernel.org/r/52F3A30C.7050205@linux.vnet.ibm.com
    Signed-off-by: H. Peter Anvin
    Signed-off-by: Greg Kroah-Hartman

commit d89985cb8cf57e6e444ee4ddf48cec39d0f28b1c
Author: KOSAKI Motohiro
Date:   Thu Feb 6 12:04:28 2014 -0800

    mm: __set_page_dirty uses spin_lock_irqsave instead of spin_lock_irq

    commit 227d53b397a32a7614667b3ecaf1d89902fb6c12 upstream.

    To use spin_{un}lock_irq is dangerous if the caller disabled
    interrupts.  During aio buffer migration, we have a possibility to see
    the following call stack:

        aio_migratepage  [disable interrupt]
          migrate_page_copy
            clear_page_dirty_for_io
              set_page_dirty
                __set_page_dirty_buffers
                  __set_page_dirty
                    spin_lock_irq

    This means the current aio migration path is deadlockable.
    spin_lock_irqsave() is a safer alternative and we should use it.

    Signed-off-by: KOSAKI Motohiro
    Reported-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

commit 4d4bed8141f7883dfbed44749653f769367b3e01
Author: KOSAKI Motohiro
Date:   Thu Feb 6 12:04:24 2014 -0800

    mm: __set_page_dirty_nobuffers() uses spin_lock_irqsave() instead of spin_lock_irq()

    commit a85d9df1ea1d23682a0ed1e100e6965006595d06 upstream.

    During an aio stress test, we observed the following lockdep warning.
    This means AIO+numa_balancing is currently deadlockable.

    The problem is that aio_migratepage disables interrupts, but
    __set_page_dirty_nobuffers unintentionally enables them again.

    Generally, all helper functions should use spin_lock_irqsave() instead
    of spin_lock_irq() because they don't know their caller at all.

    other info that might help us debug this:
      Possible unsafe locking scenario:

            CPU0
            ----
       lock(&(&ctx->completion_lock)->rlock);
       <Interrupt>
         lock(&(&ctx->completion_lock)->rlock);

     *** DEADLOCK ***

      dump_stack+0x19/0x1b
      print_usage_bug+0x1f7/0x208
      mark_lock+0x21d/0x2a0
      mark_held_locks+0xb9/0x140
      trace_hardirqs_on_caller+0x105/0x1d0
      trace_hardirqs_on+0xd/0x10
      _raw_spin_unlock_irq+0x2c/0x50
      __set_page_dirty_nobuffers+0x8c/0xf0
      migrate_page_copy+0x434/0x540
      aio_migratepage+0xb1/0x140
      move_to_new_page+0x7d/0x230
      migrate_pages+0x5e5/0x700
      migrate_misplaced_page+0xbc/0xf0
      do_numa_page+0x102/0x190
      handle_pte_fault+0x241/0x970
      handle_mm_fault+0x265/0x370
      __do_page_fault+0x172/0x5a0
      do_page_fault+0x1a/0x70
      page_fault+0x28/0x30

    Signed-off-by: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman
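For illustration, the generic pattern both of these fixes apply: a helper
that can be reached with interrupts already disabled must save and restore
the interrupt state instead of blindly re-enabling interrupts. The lock here
is a stand-in for the real mapping->tree_lock:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    static void helper_touch_shared_state(void)
    {
            unsigned long flags;

            /* spin_lock_irq()/spin_unlock_irq() would unconditionally
             * re-enable interrupts on the way out, breaking callers such
             * as aio_migratepage that still need them off. */
            spin_lock_irqsave(&example_lock, flags);
            /* ... touch state an interrupt handler also touches ... */
            spin_unlock_irqrestore(&example_lock, flags);
    }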
commit a0f916d429bcb240f8048c0d5f61d07c6d0c73ae
Author: Stephen Smalley
Date:   Thu Jan 30 11:26:59 2014 -0500

    SELinux: Fix kernel BUG on empty security contexts.

    commit 2172fa709ab32ca60e86179dc67d0857be8e2c98 upstream.

    Setting an empty security context (length=0) on a file will lead to
    incorrectly dereferencing the type and other fields of the security
    context structure, yielding a kernel BUG.  As a zero-length security
    context is never valid, just reject all such security contexts,
    whether coming from userspace via setxattr or coming from the
    filesystem upon a getxattr request by SELinux.

    Setting a security context value (empty or otherwise) unknown to
    SELinux in the first place is only possible for a root process
    (CAP_MAC_ADMIN), and, if running SELinux in enforcing mode, only if
    the corresponding SELinux mac_admin permission is also granted to the
    domain by policy.  In Fedora policies, this is only allowed for
    specific domains such as livecd for setting down security contexts
    that are not defined in the build host policy.

    Reproducer:
        su
        setenforce 0
        touch foo
        setfattr -n security.selinux foo

    Caveat: Relabeling or removing foo after doing the above may not be
    possible without booting with SELinux disabled.  Any subsequent access
    to foo after doing the above will also trigger the BUG.

    BUG output from Matthew Thode:

    [ 473.893141] ------------[ cut here ]------------
    [ 473.962110] kernel BUG at security/selinux/ss/services.c:654!
    [ 473.995314] invalid opcode: 0000 [#6] SMP
    [ 474.027196] Modules linked in:
    [ 474.058118] CPU: 0 PID: 8138 Comm: ls Tainted: G      D   I 3.13.0-grsec #1
    [ 474.116637] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10
    [ 474.149768] task: ffff8805f50cd010 ti: ffff8805f50cd488 task.ti: ffff8805f50cd488
    [ 474.183707] RIP: 0010:[] [] context_struct_compute_av+0xce/0x308
    [ 474.219954] RSP: 0018:ffff8805c0ac3c38 EFLAGS: 00010246
    [ 474.252253] RAX: 0000000000000000 RBX: ffff8805c0ac3d94 RCX: 0000000000000100
    [ 474.287018] RDX: ffff8805e8aac000 RSI: 00000000ffffffff RDI: ffff8805e8aaa000
    [ 474.321199] RBP: ffff8805c0ac3cb8 R08: 0000000000000010 R09: 0000000000000006
    [ 474.357446] R10: 0000000000000000 R11: ffff8805c567a000 R12: 0000000000000006
    [ 474.419191] R13: ffff8805c2b74e88 R14: 00000000000001da R15: 0000000000000000
    [ 474.453816] FS: 00007f2e75220800(0000) GS:ffff88061fc00000(0000) knlGS:0000000000000000
    [ 474.489254] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 474.522215] CR2: 00007f2e74716090 CR3: 00000005c085e000 CR4: 00000000000207f0
    [ 474.556058] Stack:
    [ 474.584325]  ffff8805c0ac3c98 ffffffff811b549b ffff8805c0ac3c98 ffff8805f1190a40
    [ 474.618913]  ffff8805a6202f08 ffff8805c2b74e88 00068800d0464990 ffff8805e8aac860
    [ 474.653955]  ffff8805c0ac3cb8 000700068113833a ffff880606c75060 ffff8805c0ac3d94
    [ 474.690461] Call Trace:
    [ 474.723779]  [] ? lookup_fast+0x1cd/0x22a
    [ 474.778049]  [] security_compute_av+0xf4/0x20b
    [ 474.811398]  [] avc_compute_av+0x2a/0x179
    [ 474.843813]  [] avc_has_perm+0x45/0xf4
    [ 474.875694]  [] inode_has_perm+0x2a/0x31
    [ 474.907370]  [] selinux_inode_getattr+0x3c/0x3e
    [ 474.938726]  [] security_inode_getattr+0x1b/0x22
    [ 474.970036]  [] vfs_getattr+0x19/0x2d
    [ 475.000618]  [] vfs_fstatat+0x54/0x91
    [ 475.030402]  [] vfs_lstat+0x19/0x1b
    [ 475.061097]  [] SyS_newlstat+0x15/0x30
    [ 475.094595]  [] ? __audit_syscall_entry+0xa1/0xc3
    [ 475.148405]  [] system_call_fastpath+0x16/0x1b
    [ 475.179201] Code: 00 48 85 c0 48 89 45 b8 75 02 0f 0b 48 8b 45 a0 48 8b 3d 45 d0 b6 00 8b 40 08 89 c6 ff ce e8 d1 b0 06 00 48 85 c0 49 89 c7 75 02 <0f> 0b 48 8b 45 b8 4c 8b 28 eb 1e 49 8d 7d 08 be 80 01 00 00 e8
    [ 475.255884] RIP [] context_struct_compute_av+0xce/0x308
    [ 475.296120] RSP
    [ 475.328734] ---[ end trace f076482e9d754adc ]---

    Reported-by: Matthew Thode
    Signed-off-by: Stephen Smalley
    Signed-off-by: Paul Moore
    Signed-off-by: Greg Kroah-Hartman
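For illustration, a sketch of the guard this fix describes: rejecting a
zero-length context before any field of the parsed structure can be
dereferenced. The function name is modeled on SELinux's context-parsing path
but simplified here; it is not the actual patch:

    #include <linux/errno.h>
    #include <linux/types.h>

    static int context_to_sid(const char *scontext, u32 scontext_len)
    {
            /* A zero-length security context is never valid, whether it
             * came from setxattr or from the filesystem via getxattr. */
            if (!scontext_len)
                    return -EINVAL;

            /* ... parse user:role:type[:range] and resolve each field ... */
            return 0;
    }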