\section{From Science Drivers to Reference Design}
\label{Sec:refdesign}
The most important characteristic that determines the speed at which a system can
survey a given sky area to a given flux limit (i.e., its depth) is its \'etendue
(or grasp), the product of its primary mirror area and the angular
area of its field of view (for a given set of observing conditions, such as
seeing and sky brightness).
The effective \'etendue for LSST will be greater than 300 m$^2$ deg$^2$, which
is more than an order of magnitude larger than that of any existing facility.
For example, the SDSS, with its 2.5\,m telescope \citep{2006AJ....131.2332G} and a
camera with 30 imaging CCDs \citep{1998AJ....116.3040G}, has an effective \'etendue of
only 5.9 m$^2$ deg$^2$.
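As a back-of-the-envelope check, the étendue comparison can be sketched as follows (the aperture diameter and field-of-view area used here are assumed nominal values for illustration, not specifications quoted in this paper; the effective étendue additionally accounts for vignetting and detector gaps):

```python
import math

def etendue(effective_diameter_m, fov_deg2):
    """Etendue (grasp): collecting area times field-of-view angular area."""
    area = math.pi * (effective_diameter_m / 2.0) ** 2  # m^2
    return area * fov_deg2  # m^2 deg^2

# Assumed illustrative values: ~6.4 m effective clear aperture and a
# ~9.6 deg^2 field of view give an etendue of roughly 310 m^2 deg^2,
# consistent with the ">300 m^2 deg^2" quoted in the text, and more than
# 50 times the SDSS value of 5.9 m^2 deg^2.
print(round(etendue(6.4, 9.6)))
```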
The range of scientific investigations that will be enabled by such a
dramatic improvement in survey capability is extremely broad. Guided by
the community-wide input assembled in the report of the Science Working Group of the
LSST in 2004 \citep{Document-26952}, the LSST is designed to
achieve goals set by four main science themes:
\begin{enumerate}
\item Probing dark energy and dark matter.
\item Taking an inventory of the solar system.
\item Exploring the transient optical sky.
\item Mapping the Milky Way.
\end{enumerate}
Each of these four themes itself encompasses a variety of analyses, with
varying sensitivity to instrumental and system parameters. These themes
fully exercise the technical capabilities of the system, such as photometric
and astrometric accuracy and image quality. About 90\% of the observing time
will be devoted to a deep-wide-fast (main) survey mode. The working paradigm is that all
scientific investigations will utilize a common database constructed from an optimized
observing program (the main survey mode), such as that discussed in
\S~\ref{sec:baseline}.
Here we briefly describe these science goals and the most challenging requirements for the
telescope and instrument that are derived from those goals, which will
inform the overall system design decisions discussed below.
For a more detailed discussion, we refer the reader to the LSST Science Requirements
Document \citep[SRD;][]{LPM-17}, the LSST Science Book
\citep[][hereafter SciBook]{2009arXiv0912.0201L},
and links to technical papers and presentations at
\url{https://www.lsst.org/scientists}.
\subsection{The Main Science Drivers}
The main science drivers are used to optimize various system parameters.
Ultimately, in this high-dimensional parameter space, there is a
manifold defined by the total project cost. The science
drivers must both justify this cost and provide guidance
on how to optimize various parameters while staying within the cost envelope.
Here we summarize the dozen or so most important interlocking constraints on data
and system properties placed by the four main science themes:
\begin{enumerate}
\item The depth of a single visit to a given field.
\item Image quality.
\item Photometric accuracy.
\item Astrometric accuracy.
\item Optimal exposure time.
\item The filter complement.
\item The distribution of revisit times (i.e., the cadence of observations),
including the survey lifetime.
\item The total number of visits to a given area of sky.
\item The co-added survey depth.
\item The distribution of visits on the sky, and the total sky coverage.
\item The distribution of visits per filter.
\item Parameters characterizing data processing and data access
(such as the maximum time allowed after each exposure to report
transient sources, and the maximum allowed software
contribution to measurement errors).
\end{enumerate}
We present a detailed discussion of how these science-driven data properties are
transformed to system parameters below.
\subsubsection{Probing Dark Energy and Dark Matter}
\label{sec:Dark_Energy}
Current models of cosmology require the existence of both dark matter and dark
energy to match observational constraints
\citep[see, e.g.,][and references therein]{2007ApJ...659...98R,2009ApJS..180..330K,2010MNRAS.401.2148P,2012arXiv1211.0310L,2015PNAS..11212249W}. Dark energy affects the cosmic history of both the Hubble expansion
and mass clustering. Distinguishing competing models for the physical
nature of dark energy, or alternative explanations involving
modifications of the general theory of relativity, will require
percent-level measurements of both the cosmic expansion and the growth
of dark matter structure as a function of redshift. Any given
cosmological probe is sensitive to, and thus constrains degenerate
combinations of, several cosmological and astrophysical/systematic parameters. Therefore, the most robust
cosmological constraints are the result of using interlocking combinations
of probes. The most powerful probes include weak gravitational lens cosmic shear (weak lensing [WL]), galaxy clustering and baryon
acoustic oscillations (large-scale structure [LSS]), the mass function and clustering of clusters of galaxies,
time delays in lensed quasar and supernova (SN) systems,
and photometry of Type Ia SNe --- all as functions of
redshift. Using the cosmic microwave background (CMB) fluctuations as the normalization, the
combination of these probes can yield the needed precision to distinguish among models of dark
energy \citep[see, e.g.,][and references therein]{2006JCAP...08..008Z}. The challenge is to turn this available precision into accuracy, by careful modeling and marginalization over a variety of systematic effects \citep[see, e.g.,][]{2017MNRAS.470.2100K}.
Meanwhile, there are a number of astrophysical probes of the fundamental
properties of dark matter worth exploring, including, for example,
weak- and strong-lensing observations of the mass distribution in
galaxies and isolated and
merging clusters, in conjunction with dynamical and
X-ray observations \citep[see, e.g.,][]{2012ApJ...747L..42D,
2013ApJ...765...24N, 2013MNRAS.430...81R}, the numbers and gamma-ray
emission from dwarf satellite galaxies (see, e.g., \citealt{2014ApJ...795L..13H};
\citealt{2015ApJ...809L...4D}), the subtle perturbations of stellar
streams in the Milky Way halo by dark matter substructure
\citep{2016MNRAS.456..602B}, and massive compact halo object
microlensing \citep{2001ApJ...550L.169A}.
Three of the primary dark energy probes, WL, LSS, and SN, provide unique and
independent constraints on the LSST system design (SciBook, Chaps.~11--15).
WL techniques can be used to map the distribution of
mass as a function of redshift and thereby trace the history of both
the expansion of the universe and the growth of structure (e.g., \citealt{1999ApJ...514L..65H};
for recent reviews see \citealt{2015RPPh...78h6901K}; \citealt{2018ARA&A..56..393M}). Measurements of cosmic shear as a function of
redshift allow determination of angular distances versus cosmic time,
providing multiple independent constraints on the nature of dark
energy. These investigations require deep wide-area multicolor
imaging with stringent requirements on shear systematics in at least
two bands, and excellent photometry in at least five bands to measure
photometric redshifts (a requirement shared with LSS, and indeed all
extragalactic science drivers). The strongest constraints on the LSST
image quality arise from this science program. In order to control
systematic errors in shear measurement, the desired depth must be
achieved with many short exposures (allowing for systematics in the
measurement of galaxy shapes related
to the point-spread functions (PSFs) and telescope pointing to be diagnosed and removed). Detailed simulations of
WL techniques show that imaging over $\sim20,000$ deg$^2$ to
a 5$\sigma$ point-source depth of $r_\mathrm{AB} \sim 27.5$ gives adequate
signal to measure shapes for of order 2 billion galaxies.
These numbers are adequate to reach
Stage IV goals for dark energy, as defined by the Dark Energy Task
Force \citep{2006astro.ph..9591A}.
This
depth, as well as the corresponding deep surface brightness limit,
optimizes the number of galaxies with measured shapes in ground-based
seeing and allows their detection in significant numbers to beyond a
redshift of two. Analyzing these data will
require sophisticated data processing techniques. For example, rather
than simply co-adding all images in a given region of sky, the
individual exposures, each with their own PSF and noise
characteristics, should be analyzed simultaneously to optimally
measure the shapes of galaxies \citep{2008ASPC..394..107T,2011PASP..123..596J}.
SNe Ia provided the first robust evidence that the expansion of the
universe is accelerating \citep{1998AJ....116.1009R,1999ApJ...517..565P}. To fully
exploit the SN science potential, light curves sampled in multiple
bands every few days over the course of a few months are required. This is
essential to search for systematic differences in SN populations
(e.g., due to differing progenitor channels), which
may masquerade as cosmological effects, as well as to determine photometric
redshifts from the SNe themselves. Unlike other cosmological probes,
even a single object gives information on the relationship between
redshift and distance. Thus, a large
number of SNe across the sky allows one to search for any dependence
of dark energy properties on direction, which
would be an indicator of new physics. The results from this method can be compared
with similar measures of anisotropy from the combination of WL and LSS
\citep{2009ApJ...690..923Z}.
Given the expected SN flux distribution
at the redshifts where dark energy is important, the
single-visit depth should be at least $r\sim24$. Good image quality is
required to separate SNe photometrically from
their host galaxies. Observations in at least five photometric bands will allow
proper $K$-corrected light curves to be measured over a range of
redshift. Carrying out these $K$-corrections requires that the
calibration of the relative offsets in photometric zero-points between filters and
the system response functions, especially near the edges of
bandpasses, be accurate to about 1\% \citep{2007ApJ...666..694W},
similar to the requirements from photometric redshifts of galaxies. Deeper data
($r>26$) for small areas of the sky can extend the discovery of SNe to a mean
redshift of 0.7 (from $\sim0.5$ for the main survey), with some objects beyond $z\sim$1
\citep[][SciBook, Chap.~11]{2004AAS...20510818G,2004AAS...20510820P}. The added statistical leverage
on the ``pre-acceleration'' era ($z\ga1$) would improve constraints on the properties of
dark energy as a function of redshift.
Finally, there will be powerful cross-checks and complementarities with other planned or
proposed surveys, such as \textit{Euclid} \citep{2011arXiv1110.3193L} and \textit{WFIRST}
\citep{2015arXiv150303757S}, which will provide
%\footnote{\url{http://wfirst.gsfc.nasa.gov}}
wide-field optical-IR imaging from space;
DESI \citep{2013arXiv1308.0847L}
%\footnote{\url{http://desi.lbl.gov}}
and PFS \citep{2014PASJ...66R...1T}, which will measure
spectroscopic baryon acoustic oscillations (BAOs) with
millions of galaxies; and SKA\footnote{\url{https://www.skatelescope.org}} (radio).
Large survey volumes are key to probing dynamical dark energy models (with subhorizon
dark energy clustering or anisotropic stresses). The cross-correlation
of the three-dimensional
mass distribution -- as probed by neutral hydrogen in CHIME \citep{2014SPIE.9145E..4VN}, HIRAX \citep{2016SPIE.9906E..5XN}
or SKA, or galaxies in DESI and PFS -- with the gravitational growth
probed by tomographic shear in LSST will be a complementary way to constrain dark energy
properties beyond simply characterizing its equation of state and to test the underlying theory of gravity.
Current and future ground-based CMB experiments, such as Advanced ACT \citep{2016SPIE.9910E..14D},
SPT-3G \citep{2014SPIE.9153E..1PB}, Simons Observatory, and CMB Stage-4 \citep{2016arXiv161002743A}, will also offer invaluable opportunities for
cross-correlations with secondary CMB anisotropies.
\subsubsection{Taking an Inventory of the Solar System}
The small-body populations in the solar system, such as asteroids, trans-Neptunian objects (TNOs),
and comets, are remnants of its early assembly. The history of accretion, collisional grinding, and
perturbation by existing and vanished giant planets is preserved in the orbital elements and size
distributions of those objects. Cataloging the orbital parameters, size distributions, colors, and light
curves of these small-body populations requires a large number of observations in multiple filters
and will lead to insights into planetary formation and evolution by providing the basis and constraints
for new theoretical models. In addition, collisions in the main asteroid belt between Mars and Jupiter
still occur and occasionally eject objects on orbits that may place them on a collision course with Earth.
Studying the properties of main belt asteroids at subkilometer sizes is important for linking the near-Earth
object (NEO) population with its source in the main belt. About 20\% of NEOs, the potentially hazardous
asteroids (PHAs), are in orbits that pass within 0.05\,au of Earth's orbit, close enough that perturbations
on time scales of a century can lead to the possibility of collision. In 2005 December,
the US Congress directed\footnote{For details see \url{http://neo.jpl.nasa.gov/neo/report2007.html}} NASA to
implement a survey that would catalog 90\% of NEOs with diameters larger than 140\,m by 2020.
Discovering and linking objects in the solar system moving with a wide range of apparent velocities (from
several degrees per day for NEOs to a few arcseconds per day for the most distant TNOs) places strong
constraints on the cadence of observations, requiring closely spaced pairs of observations (two or preferably
three times per lunation) in order to link detections unambiguously and derive orbits (SciBook, Chap.~5). Individual
exposures should be shorter than about 30\,s to minimize the effects of trailing for the majority of
moving objects. The images must be well sampled to enable accurate astrometry, with absolute accuracy of at
least 0\farcs1 in order to measure orbital parameters of TNOs with enough precision to constrain theoretical
models and enable prediction of occultations. The photometric calibration should be
better than 1--2\% to measure asteroids' colors and thus determine
their types. If possible, the different filters
should be observed over a short time span to reduce apparent
variations in color due to changes in observing geometry, but they should
be repeated over many lunations in order to determine phase curves and allow shape modeling.
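The $\sim$30\,s exposure limit can be motivated with simple arithmetic (the apparent motion rate below is an assumed illustrative value, not a figure from this paper):

```python
def trail_arcsec(rate_deg_per_day, exposure_s):
    """Length of the trail left on the detector by a moving object
    during a single exposure, in arcseconds."""
    rate_arcsec_per_s = rate_deg_per_day * 3600.0 / 86400.0
    return rate_arcsec_per_s * exposure_s

# A typical NEO moving ~1 deg/day (assumed rate) trails ~1.25" in a
# 30 s exposure -- already larger than the ~0.7" seeing disk, so
# substantially longer exposures would smear fast movers and degrade
# both their detection depth and astrometry.
print(trail_arcsec(1.0, 30.0))
```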
The congressional mandate can be fulfilled with a 10\,m class
telescope equipped with a multi-gigapixel camera and a sophisticated
and robust data processing system \citep{2007IAUS..236..353I}. The images should reach a depth of at
least 24.5 (5$\sigma$ for point sources) in the $r$ band to reach high
completeness down to the 140\,m mandate for NEOs. Such an instrument
would probe the $\sim$100\,m size range at main belt distances and
discover rare distant TNOs such as Sedna \citep{2004ApJ...617..645B}
and 2012 VP113 \citep{2014Natur.507..471T}.
\subsubsection{Exploring the Transient Optical Sky}
Recent surveys have shown the power of measuring variability of
celestial sources for
studying gravitational lensing, searching for SNe, determining
the physical properties of gamma-ray burst sources, discovering
gravitational wave counterparts, probing the structure of active
galactic nuclei (AGNs), studying variable star populations, discovering
exoplanets, and many other subjects at the forefront of astrophysics
\citep[SciBook, Chap.~8;][]{2009PASP..121.1395L,2012IAUS..285..141D,2014ApJ...784...45R}.
Time domain science has diverse requirements for transient and
variable phenomena that are physically and phenomenologically
heterogeneous. It requires large-area coverage to enhance the probability
of detecting rare events; good image quality to enable differencing of
images, especially in crowded fields; good time sampling, necessary to
distinguish different types of variables and to infer
their properties (e.g., determining the intrinsic peak luminosity of Type Ia
SNe requires measuring their light-curve shape); accurate
color information to classify variable objects;
long-term
persistent observations to characterize slow-evolving transients
(e.g., tidal disruption events, superluminous SNe at high
redshift, and luminous blue variables [LBVs]); and rapid data reduction,
classification, and reporting to the community to allow immediate
follow-up with spectroscopy, further optical photometry, and imaging in other
wavebands.
Wide-area, dense temporal coverage to deep limiting magnitudes will
enable the discovery and analysis of rare and exotic objects such as
neutron stars and black hole binaries, novae and stellar flares,
gamma-ray bursts and X-ray flashes, AGNs, stellar
disruptions by black holes \citep{2011Sci...333..203B,2012Natur.485..217G},
and possibly new classes of transients, such as binary mergers of
supermassive black holes \citep{2008ApJ...682..758S},
chaotic eruptions on stellar surfaces \citep{2011ApJ...741...33A}, and completely
unexpected phenomena.
Such a survey would likely detect microlensing by stars and compact objects in
the Milky Way, but also in the Local Group and perhaps beyond \citep{2008A&A...478..755D}.
Given the duration of the LSST, it will also be possible
to detect the parallax microlensing signal of intermediate-mass black holes and
measure their masses \citep{1992ApJ...392..442G}. This would open the possibility of
discovering populations of binaries and planets via transits
\citep[e.g.,][]{2006Natur.439..437B,2010arXiv1009.3048D,2013ApJ...768..129C,2014ApJ...780...54B},
as well as obtaining spectra of lensed stars in distant galaxies.
A deep and persistent survey will discover precursors of explosive and
eruptive transients and generate large samples of transients whose study
has thus far been limited by small sample size (e.g., different
subtypes of core-collapse SNe; \citealt{2014ApJS..213...19B}).
Time series with cadences ranging from 1 minute to 10\,yr should be
probed over a significant fraction of the sky. The survey's cadence
will be sufficient, combined with the large coverage, to
serendipitously catch very short-lived events, such as eclipses in
ultracompact double-degenerate binary systems \citep{2005AJ....130.2230A};
to constrain the properties of fast faint transients (such as optical
flashes associated with gamma-ray bursts; \citealt{2008AN....329..284B}); to
detect electromagnetic counterparts to gravitational wave sources
\citep{2013ApJ...767..124N,2018ApJ...852L...3S}; and to further constrain the
properties of new classes of transients discovered by programs such as
the Deep Lens Survey \citep{2004ApJ...611..418B}, the Catalina Real-time
Transient Survey \citep{2009ApJ...696..870D}, the Palomar Transient Factory
\citep{2009PASP..121.1395L}, and the Zwicky Transient Facility \citep{2014htu..conf...27B}. Observations
over a decade will enable the study of long-period variables, intermediate-mass
black holes, and quasars \citep{2007ApJ...659..997K,2010ApJ...721.1014M,2014MNRAS.439..703G,2016JCAP...11..042C}.
The next frontier
in this field will require measuring the colors of fast transients
and probing variability at faint magnitudes. Classification of transients in
close-to-real time will require access to the full photometric history
of the objects, both before and after the transient event
\citep[e.g.,][]{2011BASI...39..387M}.
\subsubsection{Mapping the Milky Way}
A major challenge in extragalactic cosmology today concerns the formation of structure on subgalactic scales, where
baryon physics becomes important, and the nature of dark matter may manifest itself in observable ways \citep[e.g.,][]{2015PNAS..11212249W}.
The Milky Way and its environment provide a unique dataset for understanding the detailed processes that
shape galaxy formation and for testing the small-scale predictions of
our standard cosmological model. New insights into the nature and
evolution of the Milky Way will require wide-field surveys to constrain
its structure and accretion history. Further insights into the stellar
populations that make up the Milky Way can be gained with a comprehensive census of the stars
within a few hundred parsecs of the Sun.
Mapping the Galaxy requires large-area coverage; excellent image
quality to maximize photometric and astrometric accuracy,
especially in crowded fields; photometric precision of at least 1\% to
separate main-sequence and
giant stars \citep[e.g.,][]{2003ApJ...586..195H}, as well as to identify variable
stars such as RR Lyrae \citep{2010ApJ...708..717S,2011ApJ...728..106S};
and systematic astrometric errors not exceeding about 10 mas per observation to enable parallax and proper-motion measurements
(SciBook, Chaps.~6 and 7). In order to probe the halo out to its presumed edge at $\sim100$ kpc \citep{2004ASPC..327..104I}
with main-sequence stars, the total co-added depth must reach $r > 27$, with a similar depth in the $g$ band.
The metallicity distribution of stars can be studied photometrically in the Sgr tidal stream
\citep[see, e.g.,][]{2003ApJ...599.1082M,2007ApJ...670..346C} and other halo substructures
($\sim 30$\,kpc; \citealt{2007Natur.450.1020C}), yielding new insights into how
they formed. Our ability to measure these metallicities is limited by
the co-added depth in the $u$ band; to probe the outer parts of the
stellar halo, one must reach
$u\sim24.5$. To detect RR Lyrae stars beyond the Galaxy's tidal radius at $\sim 300$ kpc, the single-visit depth must
be $r \sim 24.5$.
In order to measure the tangential velocity of stars at a distance of 10 kpc, where the halo dominates over the disk, to
within 10 km s$^{-1}$ (comparable to the accuracy of large-scale radial velocity surveys), the proper-motion
accuracy should be 0.2\,mas\,yr$^{-1}$ or better. This value was also chosen to approximately match the accuracy
anticipated for the \textit{Gaia} mission\footnote{\url{http://sci.esa.int/gaia/}} \citep{2001A&A...369..339P,2012Ap&SS.341...31D}
at its faint limit ($r \sim 20$). Recent results from \textit{Gaia} Data Release 2 (DR2) demonstrate that
these early predictions of the mission performance were robust \citep{2018A&A...616A...1G,2018A&A...616A...2L}.
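The 0.2\,mas\,yr$^{-1}$ requirement follows directly from the standard relation between proper motion and tangential velocity, $v_t = 4.74\,\mu\,d$ (with $\mu$ in arcsec\,yr$^{-1}$ and $d$ in pc), rescaled here to mas\,yr$^{-1}$ and kpc:

```python
def tangential_velocity_kms(mu_mas_per_yr, distance_kpc):
    """v_t = 4.74 * mu[arcsec/yr] * d[pc] km/s, rewritten for
    mu in mas/yr and d in kpc (the factors of 1000 cancel)."""
    return 4.74 * mu_mas_per_yr * distance_kpc

# 0.2 mas/yr at 10 kpc corresponds to ~9.5 km/s, i.e. the quoted
# ~10 km/s tangential-velocity accuracy where the halo dominates.
print(tangential_velocity_kms(0.2, 10.0))
```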
In order to measure distances to solar neighborhood stars out to a distance of 300\,pc (the thin-disk scale height),
with geometric distance accuracy of at least 30\%, trigonometric parallax measurements accurate to 1 mas ($1\sigma$)
are required over 10\,yr. To achieve the required proper-motion and parallax accuracy with an assumed astrometric
accuracy of 10 mas per observation per coordinate, approximately 1000
separate observations are required. This requirement for a large
number of observations is similar to that from minimizing
systematics in WL observations (\S~\ref{sec:Dark_Energy}).
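The numbers in this paragraph are mutually consistent under a simple averaging argument (a simplification: the true error propagation depends on the distribution of visits over the survey, not just their number):

```python
import math

n_obs = 1000          # number of visits quoted in the text
sigma_single = 10.0   # mas, assumed astrometric accuracy per observation

# Averaging N independent measurements beats the error down as 1/sqrt(N):
# ~0.32 mas per coordinate, in line with the 1 mas parallax and
# 0.2 mas/yr proper-motion targets once the 10 yr baseline is folded in.
sigma_mean = sigma_single / math.sqrt(n_obs)

# The fractional distance error equals the fractional parallax error:
# at 300 pc the parallax is 1000/300 ~ 3.3 mas, so a 1 mas uncertainty
# gives the ~30% geometric distance accuracy stated above.
frac_dist_err = 1.0 / (1000.0 / 300.0)
print(round(sigma_mean, 2), round(frac_dist_err, 2))
```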
%\subsubsection{A Summary and Synthesis of Science-driven Constraints on Data Properties}
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{seeing2}
\caption{
Image quality distribution measured at the Cerro Pach\'{o}n site using
a differential image motion monitor (DIMM) at $\lambda$ = 500 nm, and corrected
using an outer scale parameter of 30 m over an 8.4 m aperture. For details
about the outer scale correction see \citet{2002PASP..114.1156T}. The observed distribution
is well described by a lognormal distribution, with the parameters shown in
the figure.}
\label{Fig:seeing}
\end{figure}
\subsubsection{A Summary and Synthesis of Science-driven Constraints on Data Properties}
The goals of all the science programs discussed above
(and many more, of course) can be accomplished by satisfying the
minimal constraints listed below. For a more elaborate listing
of various constraints, including detailed specification of
various probability density distribution functions, please see the LSST SRD \citep{LPM-17}
and the LSST Science Book \citep{2009arXiv0912.0201L}.
\begin{enumerate}
\item \emph{The single-visit depth} should reach $r\sim24.5$. This limit is
primarily driven by the search for NEOs and variable sources (e.g., SNe,
RR Lyrae stars) and by proper-motion and trigonometric parallax
measurements for stars. Indirectly, it is also driven by the
requirements on the co-added survey depth and the minimum number of
exposures required by WL science. We plan to split a single visit
into two exposures of equal length to identify and remove cosmic
rays.
\item \emph{Image quality} should maintain the limit set by the
atmosphere (the median free-air seeing is 0\farcs65 in the $r$ band
at the chosen site; see Figure~\ref{Fig:seeing}),
and not be degraded appreciably by the hardware. In addition to stringent
constraints from WL, good image quality is driven by the
required survey depth for point sources and by image differencing
techniques.
\item \emph{Photometric repeatability} should achieve 5 mmag precision
at the bright end, with zero-point stability across the sky of 10 mmag
and band-to-band calibration errors not larger than 5 mmag.
These requirements are driven by the need for high photometric redshift accuracy,
the separation of stellar populations, detection of low-amplitude variable
objects (such as eclipsing planetary systems), and the search for
systematic effects in Type Ia SN light curves.
\item \emph{Astrometric precision} should maintain the systematic limit set by
the atmosphere, of about 10 mas per visit at the bright end
(on scales below 20 arcmin). This precision is driven by the desire to
achieve a proper-motion accuracy of 0.2 mas yr$^{-1}$ and parallax accuracy of
1.0 mas over the course of a 10\,yr survey (see \S~\ref{sec:astrom}).
\item \emph{The single-visit exposure time}
should be less than about a minute
to prevent trailing of fast-moving objects and to aid control
of various systematic effects induced by the atmosphere. It should
be longer than $\sim$20\,s to avoid significant efficiency losses due to
finite readout, slew time, and read noise. As described above, we
are planning to split each visit into two exposures.
\item \emph{The filter complement} should include at least six filters
in the wavelength range limited by atmospheric absorption and
silicon detection efficiency (320--1050\,nm), with roughly
rectangular filters and no large gaps in the coverage, in order
to enable robust and accurate photometric redshifts and stellar typing. An
SDSS-like $u$ band \citep{1996AJ....111.1748F} is extremely important for separating
low-redshift quasars from hot stars and for estimating the metallicities of
F/G main-sequence stars. A bandpass with an effective wavelength of
about 1\,\micron\ would enable studies of substellar objects, high-redshift
quasars (to redshifts of $\sim$7.5), and regions of the Galaxy that are obscured
by interstellar dust.
\item \emph{The revisit time distribution} should enable determination of
orbits of solar system objects and sample SN light curves every few days,
while accommodating constraints set by proper-motion and trigonometric
parallax measurements.
\item \emph{The total number of visits} to any given area of sky, when accounting for all
filters, should be of the order of 1000, as mandated by WL
science, the search for NEOs, and proper-motion and
trigonometric parallax measurements. Studies of transient sources
also benefit from a large number of visits.
\item \emph{The co-added survey depth} should reach
$r\sim27.5$, with sufficient signal-to-noise ratio (S/N) in other bands
to address both extragalactic and Galactic science drivers.
\item \emph{The distribution of visits per filter} should enable
accurate photometric redshifts, separation of stellar populations,
and sufficient depth to enable detection of faint extremely red
sources (e.g., brown dwarfs and high-redshift quasars). Detailed simulations of
photometric redshift uncertainties
suggest a roughly similar number of visits among bandpasses
(but because the system throughput and atmospheric properties are
wavelength dependent, the achieved depths are different in different
bands). The adopted time allocation
(see Table~\ref{tab:baseline}) includes a slight preference for the $r$ and $i$ bands because of their
dominant role in star/galaxy separation and WL measurements.
\item \emph{The distribution of visits on the sky} should extend over
at least $\sim$18,000\,deg$^2$ to obtain the required number of galaxies
for WL studies, with attention paid to include ``special''
regions such as the ecliptic and Galactic planes, and the Large and Small
Magellanic Clouds. For comparison,
the full area that can be observed at air mass less than 2.0 from
any midlatitude site is about 30,000\,deg$^2$.
\item \emph{Data processing, data products, and data access} should
result in data products that approach the statistical uncertainties
in the raw data, i.e., the processing must be close to optimal.
To enable fast and efficient response to
transient sources, the processing latency for variable sources should be less than a minute,
with a robust and accurate preliminary characterization
of all reported variables.
\end{enumerate}
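The $\sim$30,000\,deg$^2$ quoted above for a midlatitude site can be recovered with a short geometric estimate. The following Python sketch is illustrative only (it assumes a site latitude of $-30\arcdeg$, close to Cerro Pach\'{o}n, and counts all declinations that culminate at air mass below 2.0):

```python
import math

lat = -30.0      # assumed site latitude in degrees (close to Cerro Pachon)
alt_min = 30.0   # altitude at which air mass = 1/sin(alt) reaches 2.0

# A declination culminates at altitude 90 - |dec - lat|, so the
# observable cap is dec <= lat + (90 - alt_min) = +30 deg here.
dec_max = lat + (90.0 - alt_min)

# Fraction of the 41,253 deg^2 celestial sphere with dec < dec_max
frac = (1.0 + math.sin(math.radians(dec_max))) / 2.0
area = frac * 4.0 * math.pi * (180.0 / math.pi) ** 2
print(f"{area:.0f} deg^2")  # prints 30940 deg^2
```

The result, $\sim$31,000\,deg$^2$, agrees with the approximate figure quoted in the text.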
Remarkably, even with these joint requirements, none of the
individual science programs are severely overdesigned, i.e., despite
their significant scientific diversity, these programs are highly
compatible in terms of desired data characteristics. Indeed, any one
of the four main science drivers could be removed, and the remaining
three would still yield very similar requirements for most system
parameters. As a result, the LSST system can adopt a highly
efficient survey strategy in which \textit{a single dataset serves most science
programs} (instead of science-specific surveys executed in series).
One can view this project as \textit{massively parallel astrophysics}.
The vast majority (about 90\%) of the observing time will be devoted to
a deep-wide-fast survey mode of the sort we have just described, with
the remaining 10\%
allocated to special programs that will also address multiple science
goals. Before describing these surveys in detail, we discuss the main
system parameters.
\begin{deluxetable}{l|l}[t]
\tablecaption{The LSST Baseline Design and Survey Parameters\label{tab:baseline}}
\tablehead{
\colhead{Quantity} & \colhead{Baseline Design Specification}
}
\startdata
Optical config. & Three-mirror modified Paul-Baker \\
Mount config. & Alt-azimuth \\
Final $f$-ratio, aperture & $f$/1.234, 8.4 m \\
Field of view, \'etendue & 9.6 deg$^2$, 319 m$^2$deg$^2$ \\
Plate scale & 50.9 \micron\,arcsec$^{-1}$ (0\farcs2 pixels) \\
Pixel count & 3.2 gigapixels \\
Wavelength coverage & 320 -- 1050 nm, $ugrizy$ \\
Single-visit depths, design\tablenotemark{a} & 23.9, 25.0, 24.7, 24.0, 23.3, 22.1 \\
Single-visit depths, min.\tablenotemark{b} & 23.4, 24.6, 24.3, 23.6, 22.9, 21.7 \\
Mean number of visits\tablenotemark{c} & 56, 80, 184, 184, 160, 160 \\
Final (co-added) depths\tablenotemark{d} & 26.1, 27.4, 27.5, 26.8, 26.1, 24.9 \\
\enddata
\tablenotetext{a}{Design specification from the Science Requirements Document \citep[SRD;][]{LPM-17} for 5$\sigma$ depths
for point sources in the $ugrizy$ bands, respectively. The listed
values are expressed on the AB magnitude
scale and correspond to point sources and fiducial zenith observations (about 0.2 mag loss of depth
is expected for realistic air-mass distributions; see Table~\ref{tab:eqparams} for more details).}
\tablenotetext{b}{Minimum specification from the Science Requirements Document for 5$\sigma$ depths.}
\tablenotetext{c}{An illustration of the distribution of the number of visits as a function of bandpass,
taken from Table 24 in the SRD.}
\tablenotetext{d}{Idealized depth of co-added images, based on design specification for 5$\sigma$ depth and
the number of visits in the penultimate row (taken from Table 24 in the SRD).}
\end{deluxetable}
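The idealized co-added depths in the last row of Table~\ref{tab:baseline} follow directly from the design single-visit depths and the mean numbers of visits: stacking $N$ visits improves the point-source S/N by $\sqrt{N}$, deepening the $5\sigma$ limit by $2.5\log_{10}\sqrt{N}$ mag. An illustrative check in Python:

```python
import math

# Design single-visit 5-sigma depths and mean visit counts per band,
# taken from the table above.
bands   = ["u", "g", "r", "i", "z", "y"]
m5      = [23.9, 25.0, 24.7, 24.0, 23.3, 22.1]
nvisits = [56, 80, 184, 184, 160, 160]

for b, m, n in zip(bands, m5, nvisits):
    # Co-adding N visits boosts S/N by sqrt(N), i.e. the 5-sigma
    # limit deepens by 2.5*log10(sqrt(N)) magnitudes.
    coadd = m + 2.5 * math.log10(math.sqrt(n))
    print(f"{b}: {coadd:.1f}")
# prints u: 26.1, g: 27.4, r: 27.5, i: 26.8, z: 26.1, y: 24.9
```

The output reproduces the tabulated co-added depths band by band.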
\subsection{The Main System Design Parameters}
Given the minimum science-driven constraints on the data properties listed
in the previous section, we now discuss how they are translated into
constraints on the main system design parameters: the aperture size,
the survey lifetime, the optimal exposure time, and the filter complement.
\subsubsection{The Aperture Size }
\label{Sec:apSize}
The product of the system's \'etendue and the survey lifetime, for given
observing conditions, determines
the sky area that can be surveyed to a given depth.
The
LSST field-of-view area is maximized to its practical limit, $\sim$10\,deg$^2$,
determined by the requirement that the delivered image quality be dominated
by atmospheric seeing at the chosen site (Cerro Pach\'{o}n in northern Chile).
A larger field of view would lead to unacceptable deterioration of the
image quality. This constraint leaves the primary mirror diameter and survey lifetime
as free parameters. The adopted survey lifetime of 10\,yr is a compromise
between a shorter survey, which would require an excessively large and expensive
mirror (15\,m for a 3\,yr survey and 12\,m for a 5\,yr survey) and would yield
less effective proper-motion measurements, and a smaller telescope, which would
require more time to complete the survey, with the associated increase in
operations cost.
The primary mirror size is a function of the required survey depth and the
desired sky coverage. By and large, the anticipated science outcome scales
with the number of detected sources. For practically all astronomical source
populations, in order to maximize the number of detected sources, it is more
advantageous to maximize first the area and then
the detection depth\footnote{
If the total exposure time is doubled and used to double the survey area,
the number of sources increases by a factor of two. If the survey
area is kept fixed, the increased exposure time will result in
$\sim$0.4 mag deeper data (see eq.~\ref{m5}). For cumulative source
counts described by $\log(N) = C + k*m$, the number of sources
will increase by more than a factor of two only if $k>0.75$.
Apart from $z<2$ quasars, practically all populations
have $k$ at most 0.6 (the Euclidean value), and faint stars
and galaxies have $k<0.5$. For more details, please see \citet{2003AJ....125.2740N}.}.
For this reason, the sky area for the main survey is
maximized to its practical limit, 18,000\,deg$^2$, determined by the
requirement to avoid air masses larger than 1.5,
which would substantially
deteriorate the image quality and the survey depth (see eq.~\ref{m5}).
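The area-versus-depth argument in the footnote above can be made concrete: doubling the exposure time gains $\sim$0.4 mag of depth, so for cumulative counts $\log(N) = C + k\,m$ the sample grows by $10^{0.4k}$, exceeding the factor of 2 gained by doubling the area only for $k>0.75$. An illustrative numerical check:

```python
# Depth gained by doubling the exposure time (~0.4 mag; see eq. m5).
dm = 0.4

# For cumulative counts log N = C + k*m, a dm-deeper limit multiplies
# the sample by 10**(k*dm); doubling the area always doubles it.
for k in (0.4, 0.5, 0.6, 0.75, 0.9):
    depth_gain = 10 ** (k * dm)
    better = "depth" if depth_gain > 2.0 else "area"
    print(f"k={k}: depth gain {depth_gain:.3f}x -> favor {better}")
```

For the Euclidean slope $k=0.6$ the depth gain is only $\sim$1.74$\times$, confirming that maximizing area first is the better strategy for nearly all source populations.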
With the adopted field-of-view area, the sky coverage, and the survey lifetime
fixed, the primary mirror diameter is fully driven by the required survey
depth. There are two depth requirements: the final (co-added) survey depth,
$r\sim27.5$, and the depth of a single visit, $r\sim24.5$. The two
requirements are compatible if the number of visits is several hundred
per band, which is in good agreement with independent science-driven
requirements on the latter.
The required co-added survey depth provides a direct constraint,
independent of the details of survey execution such as the exposure time per visit,
on the minimum effective primary mirror diameter of 6.4 m, as illustrated in
Figure~\ref{Fig:coaddDepth}.
\subsubsection{The Optimal Exposure Time }
The single-visit depth depends on both the primary mirror diameter and the
chosen exposure time, $t_\mathrm{vis}$. In turn, the exposure time
determines the time interval to revisit a given sky position and the total
number of visits, and each of these quantities has its own science
drivers. We summarize these simultaneous constraints in terms of the
single-visit exposure time:
\begin{itemize}
\item The single-visit exposure time should not be longer than about a minute to
prevent trailing of fast solar system moving objects and to enable efficient
control of atmospheric systematics.
\item The mean revisit time (assuming uniform cadence) for a given position
on the sky, $n$, scales as
\begin{equation}
\label{eqn:revisit}
n = \left( {t_\mathrm{vis} \over 10 \, \mathrm{sec}} \right)
\left( { A_\mathrm{sky} \over 10,000 \, \mathrm{deg}^2} \right)
\left( {10 \, \mathrm{deg}^2 \over A_\mathrm{FOV}} \right) \mathrm{days},
\end{equation}
where two visits per night are assumed (required for efficient detection of
solar system objects; see below) and the losses for realistic observing conditions
have been taken into account (with the aid of the Operations Simulator described below).
Science drivers such as SN light curves and moving objects in the solar system require
that $n<4$ days, or equivalently $t_\mathrm{vis} < 40$\,s for the nominal values
of $A_\mathrm{sky} $ and $A_\mathrm{FOV}$.
\item The number of visits to a given position on the sky, $N_\mathrm{visit}$,
with losses for realistic observing conditions taken into account,
is given by
\begin{equation}
\label{eqn:nvisits}
N_\mathrm{visit} = \left( {3000 \over n} \right)
\left( { T \over 10 \, \mathrm{yr}} \right).
\end{equation}
The requirement $N_\mathrm{visit}>800$ again implies that $n<4$ and
$t_\mathrm{vis} < 40$\,s if the survey lifetime, $T$, is about 10\,yr.
\item These three requirements place a firm upper limit on the
optimal visit exposure time of $t_\mathrm{vis} < 40$\,s. Surveying
efficiency (the ratio of open-shutter time to the total
time spent per visit) considerations place a lower limit on
$t_\mathrm{vis}$ due to finite detector readout and telescope slew time (the longest
acceptable readout time is set to 2\,s, the shutter open-and-close
time is 2\,s, and the slew-and-settle time is set to 5\,s, including
the readout time for the second exposure in a visit):
\begin{equation}
\label{eqn:epsilon}
\epsilon = \left( {t_\mathrm{vis} \over t_\mathrm{vis} + 9 \, \mathrm{sec}}\right).
\end{equation}
To maintain efficiency losses below $\sim$30\% (i.e., below the loss level
set by weather patterns), and to minimize the impact of read noise,
$t_\mathrm{vis} > 20$\,s is required.
\end{itemize}
Taking these constraints simultaneously into account, as summarized in
Figure~\ref{Fig:singleDepth},
yielded the following reference design:
\begin{enumerate}
\item A primary mirror effective diameter of $\sim$6.5\,m. With the adopted optical
design, described below, this effective diameter corresponds to a geometrical diameter
of $\sim$8\,m. Motivated by characteristics of the existing equipment at the
Steward Mirror Laboratory, which fabricated the primary mirror, the adopted
geometrical diameter is set to 8.4\,m.
\item A visit exposure time of 30\,s (using two 15\,s exposures
to efficiently reject cosmic rays; the possibility of a single exposure per visit,
to improve observing efficiency, will be investigated during the commissioning phase),
yielding $\epsilon=77$\%.
\item A revisit time of 3 days on average for 10,000\,deg$^2$ of sky,
with two visits per night.
\end{enumerate}
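These reference values can be checked directly against Equations (\ref{eqn:revisit})--(\ref{eqn:epsilon}); a minimal sketch:

```python
# Reference design values assumed from the text (not an official tool).
t_vis = 30.0     # visit exposure time (s)
A_sky = 10000.0  # sky area covered with the nominal cadence (deg^2)
A_fov = 10.0     # field-of-view area (deg^2, rounded)
T     = 10.0     # survey lifetime (yr)

# Eq. (revisit): mean revisit time in days (two visits per night).
n = (t_vis / 10.0) * (A_sky / 10000.0) * (10.0 / A_fov)

# Eq. (nvisits): total number of visits per sky position.
N_visit = (3000.0 / n) * (T / 10.0)

# Eq. (epsilon): open-shutter efficiency with 9 s overhead per visit.
eps = t_vis / (t_vis + 9.0)

print(n, N_visit, round(eps, 2))  # prints 3.0 1000.0 0.77
```

The 30\,s visit thus yields the quoted 3-day revisit time, $\sim$1000 visits over 10\,yr, and $\epsilon=77$\%.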
\begin{figure}[t]
\includegraphics[width=\hsize,clip]{coaddedDepth}
\caption{Co-added depth in the $r$ band (AB magnitudes) vs. the effective aperture and
the survey lifetime. It is assumed that 22\% of the total observing time (corrected for
weather and other losses) is allocated for the $r$ band and that the ratio of
the surveyed sky area to the field-of-view area is 2000.}
\label{Fig:coaddDepth}
\end{figure}
\begin{figure}[t]
\includegraphics[width=1.0\hsize,clip]{singleDepth}
\caption{Single-visit depth in the $r$ band (5$\sigma$ detection for
point sources, AB magnitudes) vs. revisit time, $n$ (days), as a function of
the effective aperture size. With a coverage of 10,000\,deg$^2$ in two bands,
the revisit time directly constrains the visit exposure time, $t_\mathrm{vis}=10n$\,s.
In addition to direct constraints on optimal exposure time, $t_\mathrm{vis}$
is also driven by requirements on the revisit time, $n$, the total number of visits
per sky position over the survey lifetime, $N_\mathrm{visit}$, and the survey efficiency,
$\epsilon$ (see Equations (\ref{eqn:revisit})--(\ref{eqn:epsilon})). Note that these constraints result in a fairly narrow range of
allowed $t_\mathrm{vis}$ for the main deep-wide-fast survey.}
\label{Fig:singleDepth}
\end{figure}
To summarize, the chosen primary mirror diameter is the \textit{minimum}
diameter that simultaneously satisfies the depth ($r\sim24.5$ for single visit and
$r\sim27.5$ for co-added depth) and cadence (revisit time of 3--4 days,
with 30\,s per visit) constraints described above.
\subsection{System Design Trade-offs}
We note that the Pan-STARRS project \citep{2002SPIE.4836..154K,2010SPIE.7733E..0EK}, with similar science
goals to LSST, envisions a distributed aperture design, where the total
system \'etendue is
a sum of \'etendue values for an array of small 1.8\,m telescopes\footnote{The
first of these telescopes, PS1, has been operational for some time \citep{2016arXiv161205560C}, and
has an \'etendue 1/24 that of LSST. }.
Similarly, the LSST system could perhaps be made as two smaller copies with
6\,m mirrors, or four copies with 4\,m mirrors, or 16 copies with 2\,m mirrors. Each
of these clones would have to have its own 3-gigapixel camera (see below), and
given the added risk and complexity (e.g., maintenance, data processing), the monolithic
design seems advantageous for a system with such a large \'etendue as LSST.
It is informative to consider the trade-offs that would be required
for a system with a smaller aperture, if the science requirements were
to be maintained. For this comparison, we consider a four-telescope version of
the Pan-STARRS survey (PS4). With an \'etendue about 6 times smaller
than that of LSST (effective diameters of 6.4\,m and 3.0\,m, and a field-of-view area
of 9.6\,deg$^2$ vs.\ 7.2\,deg$^2$), and all observing conditions being equal,
the PS4 system could in principle use a cadence identical to that of LSST. The
main difference in the datasets would be a faint limit shallower by about
1\,mag in a given survey lifetime. As a result, for Euclidean populations the
sample sizes would go down by a factor of 4, while for populations of
objects with a shallower slope of the number--magnitude relation (e.g.,
galaxies around a redshift of 1) the samples would be smaller by a factor of 2--3.
The distance limits for nearby sources, such as Milky Way stars, would drop to
60\% of their corresponding LSST values, and the NEO completeness level mandated by
the US Congress would not be reached.
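The scalings quoted above follow from the \'etendue ratio alone; an illustrative sketch (assuming equal observing time and background-limited imaging, so that S/N scales as the square root of the collected flux):

```python
import math

etendue_ratio = 6.0  # LSST etendue / PS4 etendue (approximate)

# Equal observing time: the 5-sigma limit is shallower by
# 1.25*log10(ratio) mag, i.e. "about 1 mag".
dm = 1.25 * math.log10(etendue_ratio)   # ~0.97 mag

# Sample-size penalty for cumulative counts log N = C + k*m:
euclidean = 10 ** (0.6 * dm)            # k = 0.6: ~3.8x fewer sources
shallow   = 10 ** (0.4 * dm)            # k = 0.4: ~2.4x fewer sources

# Flux-limited distance limits shrink by 10**(-dm/5):
dist = 10 ** (-dm / 5.0)                # ~0.64 of the LSST value

print(f"dm={dm:.2f} mag, Euclidean x{euclidean:.1f}, "
      f"shallow x{shallow:.1f}, distance ratio {dist:.2f}")
```

These simple estimates reproduce the factor-of-4 (Euclidean) and factor-of-2--3 (shallow-slope) sample reductions and the $\sim$60\% distance limits quoted in the text.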
If instead the survey co-added depth were to be maintained, then the survey sky
area would have to be 6 times smaller ($\sim$3500\,deg$^2$). If the
survey single-visit depth were to be maintained, then the exposure
time would have to be about 6 times longer (ignoring the slight difference
in the field-of-view area and simply scaling by the \'etendue ratio),
resulting in non-negligible trailing losses for solar system objects
and either
(i) a factor of six smaller sky area observed within $n=3$ days, or
(ii) the same sky area revisited every $n=18$ days.
Given these conflicts, one solution would be to split the observing time and
allocate it to individual specialized programs (e.g., large sky area vs.\
deep co-added data vs.\ deep single-visit data vs. small-$n$ data, etc.),
as is being done by the PS1 Consortium\footnote{More information about
Pan-STARRS is available from \url{http://pswww.ifa.hawaii.edu/pswww/}.}.
In summary,
given the science requirements as stated here, there is a
minimum \'etendue of $\sim$300\,m$^2$deg$^2$, which enables our seemingly
disparate science goals to be addressed with a single dataset.
A system with a smaller \'etendue would require separate specialized surveys
to address the science goals, which results in a loss of surveying
efficiency\footnote{The converse is also true: for every \'etendue
there is a set of optimal science goals that such a system can
address with a high efficiency.}. The LSST is designed to reach this
minimum \'etendue for the science goals stated in its SRD.
\subsection{ The Filter Complement }
The LSST filter complement ($ugrizy$, see Figure~\ref{Fig:filters}) is modeled after the
SDSS system \citep{1996AJ....111.1748F} because of its demonstrated success in a wide
variety of applications, including photometric redshifts of galaxies \citep{2003ApJ...595...59B},
separation of stellar populations \citep{1998ApJS..119..121L,2003ApJ...586..195H},
and photometric selection of quasars \citep{2002AJ....123.2945R,2012ApJS..199....3R}. The extension of the
SDSS system to longer wavelengths
(the $y$ band at $\sim$1\,\micron) is driven by the increased effective redshift
range achievable with the LSST, due to deeper imaging; the desire to study substellar
objects, high-redshift quasars, and regions of the Galaxy that are obscured by
interstellar dust; and
the scientific opportunity enabled by modern CCDs with high quantum efficiency
in the near-infrared (NIR).
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{filters_y4}
\caption{The LSST bandpasses. The vertical axis shows the total throughput. The computation
includes the atmospheric transmission (assuming an air mass of 1.2;
dotted line), optics, and the detector sensitivity.}
\label{Fig:filters}
\end{figure}
The chosen filter complement corresponds to a design ``sweet spot.'' We have
investigated the possibility of replacing the $ugrizy$ system with a
filter complement that includes only five filters. For example, each filter
width could be increased by 20\% over the same wavelength range (neither a
shorter wavelength range nor gaps in the wavelength coverage are desirable
options), but this option is not satisfactory. Placing the red edge of the $u$
band blueward of the Balmer break allows optimal separation of stars and
quasars, and the telluric water absorption feature at 9500\,\AA\
effectively defines the blue edge of the $y$ band. Of the remaining four
filters ($griz$), the $g$ band is already quite wide. As a last option, the
$riz$ bands could be redesigned as two wider bands. However, this option is also
undesirable because the $r$ and $i$ bands are the primary bands for WL
studies and for star/galaxy separation, and atmospheric dispersion
would worsen the PSF for a wider bandpass (e.g., at air mass of
1.3, the typical dispersion in the $u$, $g$, and $r$ bands is 0\farcs55,
0\farcs46, and 0\farcs19, respectively; if the bandpass width increased
by 50\%, the dispersion would increase by a similar factor). The effects of
atmospheric dispersion on WL studies are mitigated by modeling
the PSF as a function of the color of the object (for more
details, see \citealt{2015ApJ...807..182M, 2018MNRAS.479.1491C}).
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{modtran1}
\caption{Example of determination of the atmospheric opacity by
simultaneously fitting a three-parameter stellar model SED \citep{1979ApJS...40....1K} and
six physical parameters of a sophisticated atmospheric model \citep[MODTRAN;][]{1999SPIE.3866....2A}
to an observed F-type stellar spectrum ($F_\lambda$). The black
line is the observed spectrum, and the red line is the best fit. Note that the
atmospheric water feature around 0.9--1.0 $\mu$m is exquisitely well fit.
The components of the best-fit atmospheric opacity are shown in
Figure~\ref{Fig:modtran2}. Adapted from \citet{2010ApJ...720..811B}.}
\label{Fig:modtran1}
\end{figure}
%\newpage
\subsection{The Calibration Methods}
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{modtran2}
\caption{Components of the best-fit atmospheric opacity used to
model the observed stellar spectrum shown in Figure~\ref{Fig:modtran1}.
The atmosphere model \citep[MODTRAN;][]{1999SPIE.3866....2A} includes six
components: water vapor (blue), oxygen and other trace molecules
(green), ozone (red), Rayleigh scattering (cyan), a gray term
with a transmission of 0.989 (not shown), and an aerosol contribution
proportional to $\lambda^{-1}$ and extinction of 1.3\% at $\lambda$=0.675 $\mu$m
(not shown). The black line shows all six components combined.
Adapted from \citet{2010ApJ...720..811B}.}
\label{Fig:modtran2}
\end{figure}
Precise determination of the PSF across each image,
accurate photometric and astrometric calibration, and continuous monitoring
of system performance and observing conditions will be needed to reach the
full potential of the LSST mission. Extensive precursor data including the
SDSS dataset and our own data obtained using telescopes close to
the LSST site of Cerro Pach\'{o}n (e.g., the SOAR and Gemini South telescopes),
as well as telescopes of similar aperture (e.g., Subaru), indicate that the
photometric and astrometric accuracy will be limited not by our instrumentation
or software, but rather by atmospheric effects.
The overall photometric calibration philosophy \citep{2006ApJ...646.1436S} is to measure explicitly, at 1 nm resolution, the
instrumental sensitivity as a function of wavelength using light from a monochromatic source injected
into the telescope pupil. The dose of delivered photons is measured using a calibration photodiode whose quantum
efficiency is known to high accuracy. In addition, the LSST system will explicitly measure the atmospheric transmission
spectrum associated with each image acquired. A
dedicated 1.2\,m auxiliary calibration telescope will obtain spectra of
standard stars in LSST fields, calibrating the atmospheric throughput
as a function of wavelength \citep[][see Figures \ref{Fig:modtran1} and \ref{Fig:modtran2}]{2007PASP..119.1163S}.
The LSST auxiliary telescope will take
data at lower spectral resolution ($R \sim 150$) but wider spectral
coverage (340\,nm--1.05\,\micron) than shown in these figures, using a
slitless spectrograph and an LSST corner-raft CCD.
Celestial spectrophotometric standard stars can be used as a separate means of photometric calibration, albeit only through the
comparison of band-integrated fluxes with synthetic photometry calculations.
A similar calibration process has been undertaken by the Dark Energy
Survey (DES) team, which has been approaching a calibration
precision of 5 mmag \citep{2018AJ....155...41B}.
SDSS, PS1, and DES data
taken in good photometric conditions have approached the LSST
requirement of 1\% photometric calibration
\citep{2008ApJ...674.1217P,2012ApJ...756..158S,2018AJ....155...41B}, although measurements with ground-based telescopes
typically produce data with errors a factor of two or so larger. Analysis of
repeated SDSS scans obtained in varying observing conditions demonstrates that data
obtained in
nonphotometric conditions can also be calibrated with
sufficient accuracy \citep{2007AJ....134..973I}, as long as high-quality
photometric data also exist in the region.
The LSST calibration plan builds on this experience gained from the SDSS and other surveys.
The planned calibration process decouples the establishment of a stable and uniform internal
relative calibration from the task of assigning absolute optical flux to
celestial objects.
Celestial sources will be used to refine the internal photometric system and
to monitor stability and uniformity of the photometric data. We expect to use \citet{2016A&A...595A...2G} photometry, utilizing
the \textit{BP} and \textit{RP} photometric measurements, as well as the \textit{G} magnitudes; for a subset
of stars (e.g., F subdwarfs) we expect to be able to transfer this rigid photometric system above
the atmosphere to objects observed by LSST.
There will be
$>$100 main-sequence stars with $17<r<20$ per detector (14$\times$14 arcmin$^2$)
even at high Galactic latitudes. Standardization of photometric scales will be
achieved through direct observation of stars with well-understood spectral
energy distributions (SEDs), in conjunction with the in-dome calibration system and the atmospheric transmission spectra.
Astrometric calibration will be based on the results from the \textit{Gaia} mission \citep{2018A&A...616A...2L}, which will provide
numerous high-accuracy astrometric standards in every LSST field.
\subsection{The LSST Reference Design}
We briefly describe the reference design for the main LSST system components.
Detailed discussion of the flow-down from science requirements to system
design parameters and extensive system engineering analysis can be
found in the LSST Science Book (Chaps.~2--3).
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{mirrors}
\caption{LSST baseline optical design (modified three-mirror
Paul-Baker) with its unique
monolithic mirror: the primary and tertiary mirrors are positioned such
that they form a continuous compound surface, allowing them to be polished
from a single substrate.}
\label{Fig:optics}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{polishing}
\caption{The polishing of the primary--tertiary mirror pair at the Richard F.\ Caris Mirror Lab at the University of Arizona in Tucson. }
\label{Fig:polishing}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{observatoryFull}
\includegraphics[width=1.0\hsize,clip]{ObservatoryFull_2017}
\caption{Top: artist's rendering of the dome enclosure
with the attached summit support building on Cerro Pach\'{o}n. The LSST auxiliary
calibration telescope is shown on an adjacent rise to the right.
Bottom: photograph of the LSST Observatory as of summer 2017. Note the
different perspective from the artist's rendering. The main LSST
telescope building is on the right, waiting for the dome to be
installed. The auxiliary telescope building is on the left, with its
dome being installed.}
\label{Fig:observatory}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\hsize,clip]{TMA_Image-Oct-2017}
\caption{Baseline design for the
LSST telescope. The small focal ratio allows for a very squat
telescope, and thus a very stiff structure. }
\label{Fig:telescope}
\end{figure}
\subsubsection{ Telescope and Site}
The large LSST \'etendue is achieved in a novel three-mirror design
\citep[modified Paul-Baker Mersenne-Schmidt system;][]{2000ASPC..195...81A} with a very fast $f$/1.234 beam. The optical
design has been optimized to yield a large field of view (9.6 deg$^2$),
with seeing-limited image quality, across a wide wavelength band (320--1050
nm). Incident light is collected by an annular primary mirror, having
an outer diameter of 8.4 m and inner diameter of 5.0 m, creating an effective filled aperture of
$\sim$6.4 m in diameter once vignetting is taken into account. The
collected light is reflected to a 3.4 m convex secondary, then onto
a 5 m concave tertiary, and finally into the three refractive lenses of the camera (see Figure~\ref{Fig:optics}).
In broad terms, the primary--secondary mirror pair acts as a beam condenser, while the aspheric portion of
the secondary and tertiary mirror acts as a Schmidt camera. The three-element refractive optics of the camera
correct for the chromatic aberrations induced by the necessity of a thick Dewar window and flatten the
focal surface. During design optimization, the primary and tertiary mirror surfaces were placed such that the primary's
inner diameter coincides with the tertiary's outer diameter, thus making it possible to fabricate the mirror pair from a
single monolithic blank using spin-cast borosilicate technology (Figure~\ref{Fig:polishing}). The secondary mirror is fabricated from
a thin meniscus substrate, 100\,mm thick, made from Corning's ultra-low-expansion material. All
three mirrors will be actively supported to control wavefront distortions
introduced by gravity and environmental stresses on the telescope.
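The quoted effective aperture can be sanity-checked from the annular primary geometry: a filled disk with the same collecting area as the 8.4/5.0\,m annulus has diameter $\sqrt{8.4^2-5.0^2}\approx 6.75$\,m, which vignetting in the optical train reduces to the quoted $\sim$6.4\,m. An illustrative sketch (the inferred vignetting loss is a rough implication, not a design number):

```python
import math

D_out, D_in = 8.4, 5.0  # primary mirror outer / inner diameters (m)

# Diameter of a filled disk with the same collecting area as the annulus.
D_eq = math.sqrt(D_out**2 - D_in**2)
print(f"annulus-equivalent aperture: {D_eq:.2f} m")  # prints 6.75 m

# The quoted ~6.4 m effective aperture then implies roughly a 10%
# additional collecting-area loss to vignetting.
loss = 1.0 - (6.4 / D_eq) ** 2
print(f"implied vignetting loss: {loss:.0%}")  # prints 10%
```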
The primary--tertiary mirror was cast and polished
by the Richard F.\ Caris Mirror Lab at the University of Arizona in Tucson
before being inspected and accepted by LSST in 2015 April
\citep{2016SPIE.9906E..0LA}. The primary--tertiary mirror cell was
fabricated by CAID in Tucson and is undergoing acceptance tests. The
integration of the actuators and final tests with the mirror is
scheduled for early 2018.
The LSST Observing Facility (Figure~\ref{Fig:observatory}),
consisting of the telescope enclosure and summit support building, is being constructed atop Cerro Pach\'{o}n in northern Chile,
sharing the ridge with the Gemini South and SOAR telescopes\footnote{Coordinates listed in older versions
of this paper were incorrect. We thank E. Mamajek for pointing out this error to us.}
(the center of the telescope pier is located at latitude S 30$\arcdeg$14$\arcmin$40\farcs68,
longitude W 70$\arcdeg$44$\arcmin$57\farcs90, elevation 2652\,m;
\citealt{2012arXiv1210.1616M}). The telescope enclosure houses a compact, stiff
telescope structure (see Figure~\ref{Fig:telescope}) atop a 15\,m high concrete pier
with a fundamental frequency of 8\,Hz, which is crucial for achieving the required fast slew-and-settle times. The height of the pier was set to place the telescope above the degrading
effects of the turbulent ground layer. Capping the telescope
enclosure is a 30\,m diameter dome with extensive ventilation to reduce
dome seeing
and to maintain a uniform thermal environment over the course of the night. Furthermore, the summit support
building has been oriented with respect to the prevailing winds to shed its turbulence away from the
telescope enclosure. The summit support building includes a coating chamber for recoating the three LSST mirrors and
clean room facilities for maintaining and servicing the camera.
\subsubsection{ Camera }
The LSST camera provides a 3.2-gigapixel flat focal plane array, tiled by 189
4K$\,\times\,$4K CCD science sensors with 10\,$\mu$m pixels (see Figs.~\ref{Fig:camera}
and \ref{Fig:fov}). This pixel count is a direct consequence of sampling the
9.6\,deg$^2$ field-of-view (0.64\,m diameter) with 0.2$\,\times\,$0.2\,arcsec$^2$
pixels (Nyquist sampling in the best expected seeing of $\sim$0\farcs4).
The sensors are deep depleted high-resistivity silicon back-illuminated devices with
a highly segmented architecture that enables the entire array to be read in 2\,s.
The detectors are grouped into 3$\,\times\,$3 rafts (see Figure~\ref{Fig:raft}); each
contains its own dedicated electronics. The rafts are mounted on a silicon carbide
grid inside a vacuum cryostat, with a custom thermal control system that maintains
the CCDs at an operating temperature of around 173\,K. The entrance window to the
cryostat is the third (L3) of the three refractive lenses in the camera. The other
two lenses (L1 and L2) are mounted in an optics structure at the front of the camera
body, which also contains a mechanical shutter and a carousel assembly that holds
five large optical filters. The five filters in the camera can be changed in 90--120\,s,
depending on the initial camera rotator position. The sixth optical filter can
replace any of the five via a procedure accomplished during daylight hours.
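The quoted focal-plane numbers are mutually consistent: 189 sensors of $4096^2$ pixels give 3.2 gigapixels, sampling 9.6\,deg$^2$ at 0\farcs2 per 10\,$\mu$m pixel gives essentially the same count, and the 50.9\,\micron\,arcsec$^{-1}$ plate scale reproduces the 0.64\,m focal-plane diameter. A quick illustrative check:

```python
import math

# Sensor tiling: 189 CCDs of 4096 x 4096 pixels.
n_pix_sensors = 189 * 4096**2
print(f"{n_pix_sensors / 1e9:.2f} Gpix")  # prints 3.17 Gpix

# Field of view of 9.6 deg^2 sampled at 0.2 arcsec per pixel.
fov_arcsec2 = 9.6 * 3600.0**2
n_pix_fov = fov_arcsec2 / 0.2**2
print(f"{n_pix_fov / 1e9:.2f} Gpix")  # prints 3.11 Gpix

# Plate scale of 50.9 um/arcsec -> focal-plane diameter for a
# circular field of 9.6 deg^2 (~3.5 deg across).
fov_diam_arcsec = math.sqrt(4.0 * fov_arcsec2 / math.pi)
print(f"{fov_diam_arcsec * 50.9e-6:.2f} m")  # prints 0.64 m
```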