UNC has indicated that our total storage for Emory datasets is currently ~70GB.
We want to identify a secure (non-public) location for these datasets after we export them.
We anticipate the export will be packaged as zip files (one per dataset), and the initial use case is to ensure service managers can access the location and download them as easily as possible.
The location should be accessible to members of the Scrum team as well as the Research Data Program Manager.
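If the S3 option below is what we pick, the download step for a service manager could look roughly like the sketch below (bucket and key names are placeholders, and a presigned URL is only one possible way to hand out access without AWS credentials):

```python
# Sketch only: assumes the S3 option below and boto3 credentials for the account
# holding the bucket. Bucket name and object key are placeholders.
import boto3

s3 = boto3.client("s3")

# Download one dataset's zip export to the local machine.
s3.download_file(
    Bucket="emory-dataverse-backup",         # placeholder bucket name
    Key="datasets/example-dataset.zip",      # placeholder object key
    Filename="example-dataset.zip",
)

# Or hand a service manager a time-limited link instead of AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "emory-dataverse-backup", "Key": "datasets/example-dataset.zip"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```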
Options to discuss:
AWS: S3 [preferred option]
AWS: EFS
AWS: Glacier
OneDrive or Sharepoint
To do: determine which account to put the bucket in
Create a new S3 bucket in our DLP account ("emory_dataverse_backup"); see the sketch after this list
Add intelligent tiering
Add Jen to the AWS account
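A minimal sketch of the bucket-creation step, assuming boto3 with credentials for the DLP account; the region default and the public-access settings are assumptions, not confirmed choices:

```python
# Sketch only: create the backup bucket in the DLP account and keep it non-public.
# Assumes boto3 with DLP-account credentials and the default region (us-east-1);
# other regions would need a CreateBucketConfiguration argument on create_bucket.
import boto3

s3 = boto3.client("s3")
bucket = "emory-dataverse-backup"  # hyphens, since underscores are not valid in bucket names

# Create the bucket.
s3.create_bucket(Bucket=bucket)

# Block all public access so the datasets stay private.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```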
I have configured the S3 bucket emory-dataverse-backup (underscores are not allowed in bucket names) and enabled Intelligent-Tiering. Files not accessed for 90 days will be moved to the Archive Access tier, and files not accessed for 180 days will be moved to the Deep Archive (Glacier-backed) tier.
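For reference, the Intelligent-Tiering setup described above could look roughly like this via boto3 (the configuration Id is a placeholder; the 90/180-day thresholds are the ones stated above):

```python
# Rough equivalent of the Intelligent-Tiering configuration described above.
# The configuration Id is a placeholder; the day thresholds match the comment.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_intelligent_tiering_configuration(
    Bucket="emory-dataverse-backup",
    Id="archive-old-exports",  # hypothetical configuration name
    IntelligentTieringConfiguration={
        "Id": "archive-old-exports",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},        # "archive" tier
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},  # Glacier-backed tier
        ],
    },
)
```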
I need @rotated8 to add Jen to the AWS account for this ticket to be completed.