PDF Support + MORE!!!! (#416)

# Added
- Added support for PDFs within Kavita. PDFs will open in the Manga reader and you can read through them as images. PDFs are heavier than archives, so they may take longer to open for reading. (Fixes #187)

# Changed
- Major change in how Kavita libraries work. Libraries now allow mixed media types, meaning you can have raw images, archives, epubs, and PDFs all within your Manga library. If the same Series exists across two different media types, the copies will be separated and an icon will show to help you identify the types. The correct reader will open regardless of which library you are in. Note: Nightly users need to delete their Raw Images libraries before updating.
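
The core idea behind mixed-type libraries is that a series is now identified by its name *and* its format, not the name alone. A minimal illustrative sketch of that keying (the field names mirror the `ParsedSeries` type from this PR; the enum values and demo code are simplified assumptions, not Kavita's actual implementation):

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: simplified format enum and series key.
public enum MangaFormat { Image, Archive, Epub, Pdf }

// Records use value equality, so two keys differing only in Format
// are distinct dictionary entries.
public record ParsedSeries(string Name, string NormalizedName, MangaFormat Format);

public static class Demo
{
    public static void Main()
    {
        var series = new Dictionary<ParsedSeries, int>();
        series[new ParsedSeries("Darker than Black", "darkerthanblack", MangaFormat.Archive)] = 10;
        series[new ParsedSeries("Darker than Black", "darkerthanblack", MangaFormat.Pdf)] = 3;

        // Same name, different formats -> two separate series entries.
        Console.WriteLine(series.Count); // 2
    }
}
```

This is why the UI can show a format icon per series and pick the right reader: the format travels with the series identity.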

# Fixed
- Fixed an issue where checking if a file was modified since the last scan always returned true, meaning we did more I/O than needed (Fixes #415)
- Fixed the top menu bar in the Manga reader not having enough spacing
- Fixed a bug where the user preferences dark mode control always showed true, even when dark mode was not in use

# Dev stuff
- For image extraction, if there is only one image we extract just that file; otherwise we extract only the image files in the directory
- Refactored all the Parser code out of the ScannerService into a self-contained class. A new instance is created for each scan, allowing multiple tasks to run without any chance of crossover.



* Fixed indentation for cs files

* Fixed an issue where the file-modified check was not working and always reported modified, meaning we were doing more file I/O than needed.

* Implemented the ability to have PDF books. No reader functionality.

* Implemented a basic form of scanning for PDF files. Reworked image-based libraries to remove the need for a special library type; raw images now work within the Manga/Comic library.

* Removed the old library types.

* Removed some extra code around old raw library types

* Fully implemented PDF support in Kavita using docnet. Removed the older PDF libraries we tried that did not work. PDFs take about 200ms to save a page to disk, so they are much slower than reading archives.
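
For reference, docnet's page-rendering API looks roughly like this. This is a sketch based on docnet's documented usage; the file path, page dimensions, and page index are placeholders, and encoding the raw buffer to an image file is left out:

```csharp
using Docnet.Core;
using Docnet.Core.Models;

// Sketch: render page 0 of a PDF into a raw BGRA pixel buffer with docnet.
// The 1080x1920 dimensions are illustrative, not Kavita's actual values.
using var docReader = DocLib.Instance.GetDocReader("/path/to/book.pdf", new PageDimensions(1080, 1920));
using var pageReader = docReader.GetPageReader(0);

byte[] rawBytes = pageReader.GetImage(); // BGRA pixel data for the page
int width = pageReader.GetPageWidth();
int height = pageReader.GetPageHeight();

// Encoding rawBytes to PNG/JPEG and writing it to the cache directory is
// the per-page disk step mentioned above; the encoder choice is omitted here.
```

Because each page must be rasterized and saved rather than simply copied out of an archive, PDF reads carry this extra cost.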

* Refactored Libraries so that they can have any file extension and the UI will decide which reader to use.

* Reworked the Series Parsing code.

We now use a separate instance for each task call, so there should be no crossover if two tasks are running at the same time.

Second, we now store Format with the Series, so we can have duplicate Series with the same name but a different type of files underneath.

* Fixed PDF transparency issues

- Used this code to fix an issue when a PDF page doesn't have a background. https://github.com/GowenGit/docnet/issues/8#issuecomment-538985672

- This also fixes the same issue for cover images
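
The workaround from that issue amounts to flattening each BGRA pixel onto a white background before encoding, so pages without a background render white instead of black. A self-contained sketch of that blend (the buffer layout matches what docnet returns; the class and method names here are ours, not Kavita's):

```csharp
using System;

public static class TransparencyFix
{
    // Composite each BGRA pixel over a white background:
    // out = src * alpha + 255 * (1 - alpha), per channel.
    public static void FlattenOntoWhite(byte[] bgra)
    {
        for (var i = 0; i < bgra.Length; i += 4)
        {
            var alpha = bgra[i + 3] / 255.0;
            bgra[i]     = (byte)(bgra[i]     * alpha + 255 * (1 - alpha)); // B
            bgra[i + 1] = (byte)(bgra[i + 1] * alpha + 255 * (1 - alpha)); // G
            bgra[i + 2] = (byte)(bgra[i + 2] * alpha + 255 * (1 - alpha)); // R
            bgra[i + 3] = 255; // pixel is now fully opaque
        }
    }
}
```

Fully transparent pixels come out pure white, and fully opaque pixels are left unchanged, which is exactly the behavior needed for both page rendering and cover-image generation.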

* Fixed an issue where if a raw image was in a directory with non-image files, those would get moved to cache when trying to open the file.

* For image extraction, if there is only one image, just copy that file to cache instead of every image in the directory.
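
That selection logic can be sketched as follows (the helper name and extension list are illustrative assumptions, not Kavita's actual implementation):

```csharp
using System;
using System.IO;
using System.Linq;

public static class ImageExtraction
{
    // Illustrative set of extensions treated as readable images.
    private static readonly string[] ImageExtensions = { ".png", ".jpg", ".jpeg", ".webp", ".gif" };

    // Decide which files from a directory should be copied to cache.
    public static string[] FilesToCache(string[] directoryFiles, string requestedFile)
    {
        var images = directoryFiles
            .Where(f => ImageExtensions.Contains(Path.GetExtension(f).ToLowerInvariant()))
            .ToArray();

        // Single image: copy just the requested file. Otherwise copy every
        // image, but never the non-image files sitting alongside them.
        return images.Length == 1 ? new[] { requestedFile } : images;
    }
}
```

Filtering on extension is what prevents the earlier bug where stray non-image files in the directory were copied into cache.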

* Added some spacing to the top menu bar

* Added an icon to the card to showcase the type of file

* Added a tag badge to the series detail page

* Fixed a bug in user preferences where dark mode control would default to true, even if you weren't on it

* Fixed up some tests

* Cleaned up some code smells

Co-authored-by: Robbie Davis <robbie@therobbiedavis.com>
Joseph Milazzo 2021-07-22 21:13:24 -05:00 committed by GitHub
parent b8165b311c
commit b0df67cdda
43 changed files with 1725 additions and 577 deletions

@@ -7,11 +7,13 @@ using System.Linq;
using System.Threading.Tasks;
using API.Data;
using API.Entities;
using API.Entities.Enums;
using API.Interfaces;
using API.Interfaces.Services;
using API.Parser;
using API.Services;
using API.Services.Tasks;
using API.Services.Tasks.Scanner;
using API.Tests.Helpers;
using AutoMapper;
using Microsoft.Data.Sqlite;
@@ -47,15 +49,6 @@ namespace API.Tests.Services
_context = new DataContext(contextOptions);
Task.Run(SeedDb).GetAwaiter().GetResult();
//BackgroundJob.Enqueue is what I need to mock or something (it's static...)
// ICacheService cacheService, ILogger<TaskScheduler> logger, IScannerService scannerService,
// IUnitOfWork unitOfWork, IMetadataService metadataService, IBackupService backupService, ICleanupService cleanupService,
// IBackgroundJobClient jobClient
//var taskScheduler = new TaskScheduler(Substitute.For<ICacheService>(), Substitute.For<ILogger<TaskScheduler>>(), Substitute.For<)
// Substitute.For<UserManager<AppUser>>() - Not needed because only for UserService
IUnitOfWork unitOfWork = new UnitOfWork(_context, Substitute.For<IMapper>(), null);
@@ -82,66 +75,64 @@ namespace API.Tests.Services
return await _context.SaveChangesAsync() > 0;
}
// [Fact]
// public void Test()
// {
// _scannerService.ScanLibrary(1, false);
//
// var series = _unitOfWork.LibraryRepository.GetLibraryForIdAsync(1).Result.Series;
// }
[Fact]
public void FindSeriesNotOnDisk_Should_RemoveNothing_Test()
{
-var infos = new Dictionary<string, List<ParserInfo>>();
+var infos = new Dictionary<ParsedSeries, List<ParserInfo>>();
-AddToParsedInfo(infos, new ParserInfo() {Series = "Darker than Black"});
-AddToParsedInfo(infos, new ParserInfo() {Series = "Cage of Eden", Volumes = "1"});
-AddToParsedInfo(infos, new ParserInfo() {Series = "Cage of Eden", Volumes = "10"});
+AddToParsedInfo(infos, new ParserInfo() {Series = "Darker than Black", Format = MangaFormat.Archive});
+AddToParsedInfo(infos, new ParserInfo() {Series = "Cage of Eden", Volumes = "1", Format = MangaFormat.Archive});
+AddToParsedInfo(infos, new ParserInfo() {Series = "Cage of Eden", Volumes = "10", Format = MangaFormat.Archive});
-var existingSeries = new List<Series>();
-existingSeries.Add(new Series()
-{
-Name = "Cage of Eden",
-LocalizedName = "Cage of Eden",
-OriginalName = "Cage of Eden",
-NormalizedName = API.Parser.Parser.Normalize("Cage of Eden"),
-Metadata = new SeriesMetadata()
-});
-existingSeries.Add(new Series()
-{
-Name = "Darker Than Black",
-LocalizedName = "Darker Than Black",
-OriginalName = "Darker Than Black",
-NormalizedName = API.Parser.Parser.Normalize("Darker Than Black"),
-Metadata = new SeriesMetadata()
-});
+var existingSeries = new List<Series>
+{
+new Series()
+{
+Name = "Cage of Eden",
+LocalizedName = "Cage of Eden",
+OriginalName = "Cage of Eden",
+NormalizedName = API.Parser.Parser.Normalize("Cage of Eden"),
+Metadata = new SeriesMetadata(),
+Format = MangaFormat.Archive
+},
+new Series()
+{
+Name = "Darker Than Black",
+LocalizedName = "Darker Than Black",
+OriginalName = "Darker Than Black",
+NormalizedName = API.Parser.Parser.Normalize("Darker Than Black"),
+Metadata = new SeriesMetadata(),
+Format = MangaFormat.Archive
+}
+};
Assert.Empty(_scannerService.FindSeriesNotOnDisk(existingSeries, infos));
}
-[Theory]
-[InlineData(new [] {"Darker than Black"}, "Darker than Black", "Darker than Black")]
-[InlineData(new [] {"Darker than Black"}, "Darker Than Black", "Darker than Black")]
-[InlineData(new [] {"Darker than Black"}, "Darker Than Black!", "Darker than Black")]
-[InlineData(new [] {""}, "Runaway Jack", "Runaway Jack")]
-public void MergeNameTest(string[] existingSeriesNames, string parsedInfoName, string expected)
-{
-var collectedSeries = new ConcurrentDictionary<string, List<ParserInfo>>();
-foreach (var seriesName in existingSeriesNames)
-{
-AddToParsedInfo(collectedSeries, new ParserInfo() {Series = seriesName});
-}
-var actualName = _scannerService.MergeName(collectedSeries, new ParserInfo()
-{
-Series = parsedInfoName
-});
-Assert.Equal(expected, actualName);
-}
+// TODO: Figure out how to do this with ParseScannedFiles
+// [Theory]
+// [InlineData(new [] {"Darker than Black"}, "Darker than Black", "Darker than Black")]
+// [InlineData(new [] {"Darker than Black"}, "Darker Than Black", "Darker than Black")]
+// [InlineData(new [] {"Darker than Black"}, "Darker Than Black!", "Darker than Black")]
+// [InlineData(new [] {""}, "Runaway Jack", "Runaway Jack")]
+// public void MergeNameTest(string[] existingSeriesNames, string parsedInfoName, string expected)
+// {
+// var collectedSeries = new ConcurrentDictionary<ParsedSeries, List<ParserInfo>>();
+// foreach (var seriesName in existingSeriesNames)
+// {
+// AddToParsedInfo(collectedSeries, new ParserInfo() {Series = seriesName, Format = MangaFormat.Archive});
+// }
+//
+// var actualName = new ParseScannedFiles(_bookService, _logger).MergeName(collectedSeries, new ParserInfo()
+// {
+// Series = parsedInfoName,
+// Format = MangaFormat.Archive
+// });
+//
+// Assert.Equal(expected, actualName);
+// }
[Fact]
public void RemoveMissingSeries_Should_RemoveSeries()
@@ -162,11 +153,19 @@ namespace API.Tests.Services
Assert.Equal(missingSeries.Count, removeCount);
}
-private void AddToParsedInfo(IDictionary<string, List<ParserInfo>> collectedSeries, ParserInfo info)
+private void AddToParsedInfo(IDictionary<ParsedSeries, List<ParserInfo>> collectedSeries, ParserInfo info)
 {
+var existingKey = collectedSeries.Keys.FirstOrDefault(ps =>
+ps.Format == info.Format && ps.NormalizedName == API.Parser.Parser.Normalize(info.Series));
+existingKey ??= new ParsedSeries()
+{
+Format = info.Format,
+Name = info.Series,
+NormalizedName = API.Parser.Parser.Normalize(info.Series)
+};
 if (collectedSeries.GetType() == typeof(ConcurrentDictionary<,>))
 {
-((ConcurrentDictionary<string, List<ParserInfo>>) collectedSeries).AddOrUpdate(info.Series, new List<ParserInfo>() {info}, (_, oldValue) =>
+((ConcurrentDictionary<ParsedSeries, List<ParserInfo>>) collectedSeries).AddOrUpdate(existingKey, new List<ParserInfo>() {info}, (_, oldValue) =>
 {
 oldValue ??= new List<ParserInfo>();
 if (!oldValue.Contains(info))
@@ -179,84 +178,25 @@ namespace API.Tests.Services
 }
 else
 {
-if (!collectedSeries.ContainsKey(info.Series))
+if (!collectedSeries.ContainsKey(existingKey))
 {
-collectedSeries.Add(info.Series, new List<ParserInfo>() {info});
+collectedSeries.Add(existingKey, new List<ParserInfo>() {info});
 }
 else
 {
-var list = collectedSeries[info.Series];
+var list = collectedSeries[existingKey];
 if (!list.Contains(info))
 {
 list.Add(info);
 }
-collectedSeries[info.Series] = list;
+collectedSeries[existingKey] = list;
 }
 }
 }
// [Fact]
// public void ExistingOrDefault_Should_BeFromLibrary()
// {
// var allSeries = new List<Series>()
// {
// new Series() {Id = 2, Name = "Darker Than Black"},
// new Series() {Id = 3, Name = "Darker Than Black - Some Extension"},
// new Series() {Id = 4, Name = "Akame Ga Kill"},
// };
// Assert.Equal(_libraryMock.Series.ElementAt(0).Id, ScannerService.ExistingOrDefault(_libraryMock, allSeries, "Darker Than Black").Id);
// Assert.Equal(_libraryMock.Series.ElementAt(0).Id, ScannerService.ExistingOrDefault(_libraryMock, allSeries, "Darker than Black").Id);
// }
//
// [Fact]
// public void ExistingOrDefault_Should_BeFromAllSeries()
// {
// var allSeries = new List<Series>()
// {
// new Series() {Id = 2, Name = "Darker Than Black"},
// new Series() {Id = 3, Name = "Darker Than Black - Some Extension"},
// new Series() {Id = 4, Name = "Akame Ga Kill"},
// };
// Assert.Equal(3, ScannerService.ExistingOrDefault(_libraryMock, allSeries, "Darker Than Black - Some Extension").Id);
// }
//
// [Fact]
// public void ExistingOrDefault_Should_BeNull()
// {
// var allSeries = new List<Series>()
// {
// new Series() {Id = 2, Name = "Darker Than Black"},
// new Series() {Id = 3, Name = "Darker Than Black - Some Extension"},
// new Series() {Id = 4, Name = "Akame Ga Kill"},
// };
// Assert.Null(ScannerService.ExistingOrDefault(_libraryMock, allSeries, "Non existing series"));
// }
[Fact]
public void Should_CreateSeries_Test()
{
// var allSeries = new List<Series>();
// var parsedSeries = new Dictionary<string, List<ParserInfo>>();
//
// parsedSeries.Add("Darker Than Black", new List<ParserInfo>()
// {
// new ParserInfo() {Chapters = "0", Filename = "Something.cbz", Format = MangaFormat.Archive, FullFilePath = "E:/Manga/Something.cbz", Series = "Darker Than Black", Volumes = "1"},
// new ParserInfo() {Chapters = "0", Filename = "Something.cbz", Format = MangaFormat.Archive, FullFilePath = "E:/Manga/Something.cbz", Series = "Darker than Black", Volumes = "2"}
// });
//
// _scannerService.UpsertSeries(_libraryMock, parsedSeries, allSeries);
//
// Assert.Equal(1, _libraryMock.Series.Count);
// Assert.Equal(2, _libraryMock.Series.ElementAt(0).Volumes.Count);
// _testOutputHelper.WriteLine(_libraryMock.ToString());
Assert.True(true);
}
private static DbConnection CreateInMemoryDatabase()
{
var connection = new SqliteConnection("Filename=:memory:");