v1.10.0 release #16

Merged 22 commits on Jun 30, 2020
Commits
2a827fa
initial stub implementation of selection features
Jun 4, 2020
5f56c43
implementation of bulk deletion, tagging of files
Jun 8, 2020
6215e06
hide non-implemented options
Jun 8, 2020
2e0a1c9
Fixed incorrect selector on imagesLoaded in the Space view
bodom0015 Jun 9, 2020
e7d907e
return thumbnail (fixes #8)
robkooper Jun 16, 2020
ec95e74
forgot to document thumbnail
robkooper Jun 17, 2020
5941200
Merge pull request #9 from robkooper/return-thumbnail
lmarini Jun 17, 2020
fb1f265
download selected files from within a dataset
Jun 23, 2020
1eebfe2
add folder download support
Jun 23, 2020
4c35843
Improve comments and minor bugfix
Jun 26, 2020
e10c7cb
downloadfolder comment
Jun 26, 2020
a772e91
Merge branch 'develop' into multiselect-files-in-dataset
Jun 26, 2020
a1e7cd0
changelog
Jun 26, 2020
56e7443
Merge pull request #10 from clowder-framework/multiselect-files-in-da…
lmarini Jun 26, 2020
abd93b3
Merge branch 'develop' into wrap-empty-collectionsbyspace-in-new-row
lmarini Jun 29, 2020
f0d3ef2
Merge pull request #11 from clowder-framework/wrap-empty-collectionsb…
lmarini Jun 29, 2020
686ccaa
Revert invalid masonry selector to fix vertical stacking/overlap issu…
bodom0015 Jun 29, 2020
dbec5d0
Merge pull request #15 from clowder-framework/fix-another-invalid-sel…
lmarini Jun 29, 2020
591bc0e
Updated version number to v1.10.0.
lmarini Jun 29, 2020
24c2c4b
exit with code 0
robkooper Jun 30, 2020
24d832a
Update CHANGELOG.md
robkooper Jun 30, 2020
0792f2c
Merge pull request #17 from robkooper/no-fail-user-exist
lmarini Jun 30, 2020
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,16 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).

## 1.10.0 - 2020-06-30

### Added
- Ability to mark multiple files in a dataset and perform bulk operations (download, tag, delete) on them at once.

### Fixed
- Return thumbnail as part of the file information.
[#8](https://github.com/clowder-framework/clowder/issues/8)
- Datasets layout on space page would sometimes have overlapping tiles.

## 1.9.0 - 2020-06-01
**_Warning:_ This update modifies information stored in Elasticsearch used for text based searching. To take advantage
of these changes a reindex of Elasticsearch is required. A reindex can be started by an admin from the Admin menu.**
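For context on the bulk-download feature listed under 1.10.0 above, here is a rough sketch of how a client might call the new partial-download endpoint. It is a hypothetical helper: the /downloadPartial path and the fileList query name are inferred from the controller code in the diff below, not confirmed by a routes file, so treat both as assumptions.

```scala
// Hypothetical client-side helper, not Clowder code: builds a URL for the
// assumed partial-download route. Path and query name are unverified guesses.
object PartialDownloadUrl {
  def apply(datasetId: String, fileIds: Seq[String]): String =
    s"/api/datasets/$datasetId/downloadPartial?fileList=${fileIds.mkString(",")}"
}

// PartialDownloadUrl("ds1", Seq("uuidA", "uuidB"))
//   => "/api/datasets/ds1/downloadPartial?fileList=uuidA,uuidB"
```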
83 changes: 78 additions & 5 deletions app/api/Datasets.scala
@@ -2065,10 +2065,16 @@ class Datasets @Inject()(
* @param dataset dataset from which to get the files
* @param chunkSize chunk size in memory in which to buffer the stream
* @param compression java built in compression value. Use 0 for no compression.
* @param bagit whether to include BagIt structures in the zip
* @param user an optional user to include in metadata
* @param fileIDs a list of UUIDs of files in the dataset to include (i.e. marked file downloads)
* @param folderId a folder UUID in the dataset to include (i.e. folder download)
* @return Enumerator to produce array of bytes from a zipped stream containing the bytes of each file
* in the dataset
*/
def enumeratorFromDataset(dataset: Dataset, chunkSize: Int = 1024 * 8, compression: Int = Deflater.DEFAULT_COMPRESSION, bagit: Boolean, user : Option[User])
def enumeratorFromDataset(dataset: Dataset, chunkSize: Int = 1024 * 8,
compression: Int = Deflater.DEFAULT_COMPRESSION, bagit: Boolean,
user : Option[User], fileIDs: Option[List[UUID]], folderId: Option[UUID])
(implicit ec: ExecutionContext): Enumerator[Array[Byte]] = {
implicit val pec = ec.prepare()
val dataFolder = if (bagit) "data/" else ""
@@ -2077,7 +2083,19 @@

// compute list of all files and folders in dataset. This will also make sure
// that all file and folder names are unique.
listFilesInFolder(dataset.files, dataset.folders, dataFolder, filenameMap, inputFiles)
fileIDs match {
case Some(fids) => {
Logger.info("Downloading only some files")
Logger.info(fids.toString)
listFilesInFolder(fids, List.empty, dataFolder, filenameMap, inputFiles)
}
case None => {
folderId match {
case Some(fid) => listFilesInFolder(List.empty, List(fid), dataFolder, filenameMap, inputFiles)
case None => listFilesInFolder(dataset.files, dataset.folders, dataFolder, filenameMap, inputFiles)
}
}
}
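// Note: fileIDs takes precedence above — when both fileIDs and folderId
// are supplied, the folderId branch is never reached.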

val md5Files = scala.collection.mutable.HashMap.empty[String, MessageDigest] //for the files
val md5Bag = scala.collection.mutable.HashMap.empty[String, MessageDigest] //for the bag files
@@ -2121,14 +2139,13 @@
* the enumerator is finished
*/

var is: Option[InputStream] = addDatasetInfoToZip(dataFolder,dataset,zip)
var is: Option[InputStream] = addDatasetInfoToZip(dataFolder, dataset, zip)
//digest input stream
val md5 = MessageDigest.getInstance("MD5")
md5Files.put(dataFolder+"_info.json",md5)
is = Some(new DigestInputStream(is.get,md5))
file_type = 1 //next is metadata


Enumerator.generateM({
is match {
case Some(inputStream) => {
@@ -2415,7 +2432,7 @@

// Use custom enumerator to create the zip file on the fly
// Use a 1MB in memory byte array
Ok.chunked(enumeratorFromDataset(dataset,1024*1024, compression,bagit,user)).withHeaders(
Ok.chunked(enumeratorFromDataset(dataset,1024*1024, compression, bagit, user, None, None)).withHeaders(
CONTENT_TYPE -> "application/zip",
CONTENT_DISPOSITION -> (FileUtils.encodeAttachment(dataset.name+ ".zip", request.headers.get("user-agent").getOrElse("")))
)
@@ -2427,6 +2444,62 @@
}
}

// Takes dataset ID and a comma-separated string of file UUIDs in the dataset and streams just those files as a zip
def downloadPartial(id: UUID, fileList: String) = PermissionAction(Permission.DownloadFiles, Some(ResourceRef(ResourceRef.dataset, id))) { implicit request =>
implicit val user = request.user
datasets.get(id) match {
case Some(dataset) => {
val fileIDs = fileList.split(',').map(fid => new UUID(fid)).toList
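// Note: this assumes every entry in fileList parses as a well-formed UUID;
// a malformed id would presumably fail here rather than be skipped.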
val bagit = play.api.Play.configuration.getBoolean("downloadDatasetBagit").getOrElse(true)

// Increment download count for each file
fileIDs.foreach(fid => files.incrementDownloads(fid, user))

// Use custom enumerator to create the zip file on the fly
// Use a 1MB in memory byte array
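// compression = -1 here is java.util.zip.Deflater.DEFAULT_COMPRESSION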
Ok.chunked(enumeratorFromDataset(dataset,1024*1024, -1, bagit, user, Some(fileIDs), None)).withHeaders(
CONTENT_TYPE -> "application/zip",
CONTENT_DISPOSITION -> (FileUtils.encodeAttachment(dataset.name+ " (Partial).zip", request.headers.get("user-agent").getOrElse("")))
)
}
// If the dataset wasn't found by ID
case None => {
NotFound
}
}
}

// Takes dataset ID and a folder ID in that dataset and streams just that folder and sub-folders as a zip
def downloadFolder(id: UUID, folderId: UUID) = PermissionAction(Permission.DownloadFiles, Some(ResourceRef(ResourceRef.dataset, id))) { implicit request =>
implicit val user = request.user
datasets.get(id) match {
case Some(dataset) => {
val bagit = play.api.Play.configuration.getBoolean("downloadDatasetBagit").getOrElse(true)

// Increment download count for each file in folder
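// (This appears to count only files directly in the folder; files in
// sub-folders are still included in the zip but are not counted here.)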
folders.get(folderId) match {
case Some(fo) => {
fo.files.foreach(fid => files.incrementDownloads(fid, user))

// Use custom enumerator to create the zip file on the fly
// Use a 1MB in memory byte array
Ok.chunked(enumeratorFromDataset(dataset,1024*1024, -1, bagit, user, None, Some(folderId))).withHeaders(
CONTENT_TYPE -> "application/zip",
CONTENT_DISPOSITION -> (FileUtils.encodeAttachment(dataset.name+ " ("+fo.name+" Folder).zip", request.headers.get("user-agent").getOrElse("")))
)
}
case None => NotFound
}


}
// If the dataset wasn't found by ID
case None => {
NotFound
}
}
}
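// Summary of the three download entry points: the full-dataset download
// passes (None, None), downloadPartial passes (Some(fileIDs), None), and
// downloadFolder passes (None, Some(folderId)) to enumeratorFromDataset.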

def updateAccess(id:UUID, access:String) = PermissionAction(Permission.PublicDataset, Some(ResourceRef(ResourceRef.dataset, id))) { implicit request =>
implicit val user = request.user
user match {
27 changes: 7 additions & 20 deletions app/api/Files.scala
@@ -728,42 +728,29 @@ class Files @Inject()(
"content-type" -> file.contentType,
"date-created" -> file.uploadDate.toString(),
"size" -> file.length.toString,
"thumbnail" -> file.thumbnail_id.orNull,
"authorId" -> file.author.id.stringify,
"status" -> file.status)

// Only include filepath if using DiskByte storage and user is serverAdmin
val jsonMap = file.loader match {
case "services.filesystem.DiskByteStorageService" => {
if (serverAdmin)
Map(
"id" -> file.id.toString,
"filename" -> file.filename,
"filepath" -> file.loader_id,
"filedescription" -> file.description,
"content-type" -> file.contentType,
"date-created" -> file.uploadDate.toString(),
"size" -> file.length.toString,
"authorId" -> file.author.id.stringify,
"status" -> file.status)
defaultMap ++ Map(
"filepath" -> file.loader_id
)
else
defaultMap
}
case "services.s3.S3ByteStorageService" => {
if (serverAdmin) {
val bucketName = configuration.getString(S3ByteStorageService.BucketName).getOrElse("")
val serviceEndpoint = configuration.getString(S3ByteStorageService.ServiceEndpoint).getOrElse("")
Map(
"id" -> file.id.toString,
"filename" -> file.filename,
defaultMap ++ Map(
"service-endpoint" -> serviceEndpoint,
"bucket-name" -> bucketName,
"object-key" -> file.loader_id,
"filedescription" -> file.description,
"content-type" -> file.contentType,
"date-created" -> file.uploadDate.toString(),
"size" -> file.length.toString,
"authorId" -> file.author.id.stringify,
"status" -> file.status)
"object-key" -> file.loader_id
)
} else
defaultMap
}
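The refactor above replaces several copies of the same literal map with a shared defaultMap that each storage backend extends. A minimal standalone sketch of the Map ++ mechanism it relies on, with illustrative values rather than Clowder data:

```scala
object MapMergeSketch extends App {
  val defaultMap = Map("id" -> "abc123", "size" -> "1024")
  // ++ appends the right-hand entries; on a duplicate key the right side wins.
  val adminView = defaultMap ++ Map("filepath" -> "/data/abc123")
  println(adminView) // Map(id -> abc123, size -> 1024, filepath -> /data/abc123)
}
```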
2 changes: 2 additions & 0 deletions app/controllers/Application.scala
@@ -333,6 +333,8 @@ class Application @Inject() (files: FileService, collections: CollectionService,
api.routes.javascript.Datasets.unfollow,
api.routes.javascript.Datasets.detachFile,
api.routes.javascript.Datasets.download,
api.routes.javascript.Datasets.downloadPartial,
api.routes.javascript.Datasets.downloadFolder,
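// The two routes above are newly exposed to client-side JavaScript through
// the generated jsRoutes object; the multi-select download UI presumably
// resolves its request URLs through them.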
api.routes.javascript.Datasets.getPreviews,
api.routes.javascript.Datasets.updateAccess,
api.routes.javascript.Datasets.addFileEvent,