Btrfs: fix hang on compressed write error
author Filipe Manana <fdmanana@suse.com>
Mon, 6 Oct 2014 21:14:23 +0000 (22:14 +0100)
committer Chris Mason <clm@fb.com>
Fri, 21 Nov 2014 01:14:25 +0000 (17:14 -0800)
In inode.c:submit_compressed_extents(), before calling btrfs_submit_compressed_write()
we start writeback for all the pages, clear their dirty flag, unlock them, etc., but if
btrfs_submit_compressed_write() fails (at the moment it can only fail with -ENOMEM),
we never end writeback on the pages, so any filemap_fdatawait_range() call will
hang forever. We were also not calling the writepage end io hook, which means the
corresponding ordered extent will never complete and all its waiters will block
forever, such as a full fsync (via btrfs_wait_ordered_range()).
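
For context, here is a minimal sketch, not part of this commit, of what a waiter such
as filemap_fdatawait_range() effectively does. It is simplified (the real function looks
pages up by the writeback tag) and only assumes the usual pagecache helpers
find_get_page(), wait_on_page_writeback() and put_page(); the function name is made up:

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /*
     * Simplified waiter: filemap_fdatawait_range() uses the
     * PAGECACHE_TAG_WRITEBACK tag to find pages, but the blocking
     * step is the same wait on each page's writeback bit.
     */
    static void sketch_wait_writeback_range(struct address_space *mapping,
                                            pgoff_t index, pgoff_t end)
    {
            for (; index <= end; index++) {
                    struct page *page = find_get_page(mapping, index);

                    if (!page)
                            continue;
                    /*
                     * If the page is under writeback, sleep until
                     * end_page_writeback() is called on it.
                     */
                    wait_on_page_writeback(page);
                    put_page(page);
            }
    }

This is why the error path added below must both end writeback on the pages
(PAGE_END_WRITEBACK) and run the writepage end io hook, so the ordered extent is
finished and its waiters are woken.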

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
fs/btrfs/inode.c

index fcb9a38..ec68eae 100644
@@ -814,6 +814,20 @@ retry:
                                    ins.objectid,
                                    ins.offset, async_extent->pages,
                                    async_extent->nr_pages);
+               if (ret) {
+                       struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
+                       struct page *p = async_extent->pages[0];
+                       const u64 start = async_extent->start;
+                       const u64 end = start + async_extent->ram_size - 1;
+
+                       p->mapping = inode->i_mapping;
+                       tree->ops->writepage_end_io_hook(p, start, end,
+                                                        NULL, 0);
+                       p->mapping = NULL;
+                       extent_clear_unlock_delalloc(inode, start, end, NULL, 0,
+                                                    PAGE_END_WRITEBACK |
+                                                    PAGE_SET_ERROR);
+               }
                alloc_hint = ins.objectid + ins.offset;
                kfree(async_extent);
                if (ret)