NAOJ GW Elog Logbook 3.2

I have plotted the maps of the small sapphire sample we measured.

Last Thursday, the company came to install our clean booth (three parts).
1. The first part is for the in-air bench; it has the highest cleanliness level.
2. The second part is for electronics and control.
3. The third one is for changing clean suits.
After the installation, we cleaned everything that was to be put inside, as well as what was already in the room. We also made some other changes.
1. We connected everything that needed to be connected. All the cables now go under the steps around the in-air bench (Fig 1).
2. The laser switch boxes are now all under the in-air bench (Fig 2).
3. The control computer and the transmission camera monitor are now in the second part of the clean room (Fig 3).
Finally, we brought back the locking of our filter cavity, both for green and infrared.
The first is an overall picture of the clean booth; from this picture you can see the three different parts Yuhang mentioned, with the cleanliness level increasing from the closest part to the furthest.
The second and the third pictures show where we put the clean suits and gloves in the first clean booth; we are going to add another set of drawers next to the present one.
The fourth one shows the shelf we put in the middle clean booth.
The last picture shows the tube between the bench clean booth and the PR tank. We cut the wall of the clean booth with a cross cut from the inside; the tube is fixed on the viewport with a metal ring. We did not put anything between the tube and the clean booth.

I have compared the transfer function measured in entry #693 and the error signals measured in entry #699 with the Matlab model of the servo.
#1 plot: TF measured vs model, using Eleonora's thesis model (PZT pole at 30 kHz).
#2 plot: TF measured vs model, where I changed the high-frequency part of the TF in order to fit the measurements. In particular, I moved the PZT pole to a higher frequency. This can be explained by the fact that the PZT transfer function is not really well known, even if it is strange that we have to change the model with respect to Eleonora's thesis measurements.
#3 plot: TF measured vs model. I changed the high-frequency part and also moved the frequency of the zeros from 1540 Hz to 1000 Hz. This is strange, since the frequency of these zeros is set by the electronics, but maybe the coherence of the TF measurement around 1 kHz is not very high.
#4 plot: error signals measured vs model. In the model I used a laser frequency noise of 7.5 kHz/f /sqrt(Hz), as measured during Eleonora's thesis. We remark that the measured error signals are higher than the model.
#5 plot: error signals divided by 2.5 vs model. The factor 2.5 is not explained. A wrong calibration factor? Some problem with the data acquisition? A higher input noise?
As a further measurement, I would suggest saving the correction signal of the PZT and maybe trying to get a better measurement of the TF below 1 kHz.
We have measured the spectrum of the piezo correction, through the channel PZT MON.
In the plot we took into account the factor 100 attenuation of the channel PZT MON and we used the calibration 2 MHz/V.
The spectrum looks similar to the one we measured in July. We fitted it with the curve 7.5 kHz/f, which is compatible with the expected free-running laser noise.
I attach the .txt file with the uncalibrated data.

After using the correct Airy function, we can get a better fit of our filter cavity. It gives the results below.
velocity | bandwidth |
200Hz/s | 119Hz |
400Hz/s | 114Hz |
80Hz/s | 115Hz |
According to this result, we can say our filter cavity's bandwidth (for infrared) is 116 +/- 2.16 Hz.
Here are some other parameters from the fit.
velocity | bandwidth | r1 (r2 is assumed to be 1) | Finesse |
200Hz/s | 119Hz | 0.999251 | 4190 |
400Hz/s | 114Hz | 0.999279 | 4355 |
80Hz/s | 115Hz | 0.999272 | 4311 |
r1=0.9992673 +/- 1.19e-5
Finesse=4285.3 +/- 69.7
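For reference, the quoted finesse is consistent with the standard relation F = pi*sqrt(r1*r2)/(1 - r1*r2) for amplitude reflectivities. A quick check (the 300 m cavity length used for the FSR cross-check is an assumption of mine, not stated in this entry):

```python
import math

r1 = 0.9992673   # fitted amplitude reflectivity of the input mirror
r2 = 1.0         # end mirror assumed perfect, as in the fit

# finesse from the mirror reflectivities
finesse = math.pi * math.sqrt(r1 * r2) / (1 - r1 * r2)
print(f"finesse from r1: {finesse:.0f}")          # ~4286, close to the fitted 4285.3

# cross-check: finesse = FSR / FWHM bandwidth, assuming a 300 m cavity
fsr = 299792458 / (2 * 300)                        # free spectral range ~ 500 kHz
finesse_from_bw = fsr / 116                        # measured bandwidth 116 Hz
print(f"finesse from bandwidth: {finesse_from_bw:.0f}")
```

The two numbers agree at the percent level, which supports the fitted r1.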

For the purpose of getting a better estimation of the cavity bandwidth, we want to fit the transmission of the end mirror.
I tried three models: a Gaussian function, a generalized normal distribution, and an Airy pattern function. However, none of them seems to fit very well. Before proceeding to the next step, I would like to ask for some suggestions.

Yesterday the clean booth in the TAMA central area was installed. Currently we are working to reorganize the area inside it and to reconnect the electronics.
At the link below you can find pictures taken of the optical table rack before we disconnected everything. They may help the restoration activity.
https://drive.google.com/open?id=1XDv4P4gmAJMNsEKLoFNGZLUkMw5nT9kr

To measure the cavity decay time, we are currently cutting the beam by bending an IR card and releasing it across the beam path. This method is not ideal and affects the signal. Since we don't have enough oscilloscope channels to take both measurements at the same time, we cannot distinguish the genuine cavity decay time from the effect of the non-ideal cutting method. Thus I tried to fit a signal obtained by cutting the beam by hand. The data is attached as a .txt file. Note that this data doesn't contain any effect other than the beam cutting.
Two different functions were used for fitting: an error function (erf) and an exponential function. The erf is obtained by integrating a Gaussian function, which seems plausible given that the transverse intensity distribution of the laser is typically Gaussian. These functions are shown in the attached figure, together with the resulting fit parameters. I assumed the beam is cut at constant velocity (the IR card crosses the beam cross-section with constant velocity).
From this comparison, the exponential decay fits better.
The Python code used is also attached (please change .txt to .py if you want to try it).
According to the exponential fit, the decay time of the "hand cut" is about 0.6 ms, which is roughly a factor of 5 smaller than the expected decay time of the cavity. We will take some more measurements in order to check the dispersion of this value.
Trying to understand why the best fitting function is not an erf (given the hypothesis that the beam is cut at constant speed): maybe the exponential decay we see in the data is dominated by the electronics? One can also try to fit with a function of the form erf + exp.
The constant velocity assumption may be wrong? I'm not sure. I can try with some acceleration, or with the combination of erf and exp you suggested.
According to the fit, the decay time is 0.3 ms, which is a factor of 10 smaller than the cavity decay time.
Actually, there is a factor of 2 to take into account in the definition of the decay time we used, that is P = P0*exp(-2*t/tau)
(see https://www.osapublishing.org/oe/abstract.cfm?uri=oe-21-24-30114 )
So the decay time from the "hand cutting" fit should be: 2/tau = 3149 s^-1 => tau = 0.6 ms. Anyway, since I used this definition also for computing the filter cavity decay time (about 2.7 ms), if I'm not wrong we still have a factor of 5 difference between the two in any case.
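The erf + exp fit suggested in this thread could be sketched as below. This is a minimal sketch on synthetic data: the model shape (sharp-edge erf transition plus a single exponential tail) and all parameter values are assumptions of mine, not the actual fit used.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def erf_exp(t, t0, w, tau, a):
    # erf term: card crossing the beam at constant speed (Gaussian profile)
    # exp term: residual decay (cavity and/or electronics); 'a' sets the mix
    edge = 0.5 * (1.0 - erf((t - t0) / w))
    decay = np.exp(-np.maximum(t - t0, 0.0) / tau)
    return a * edge + (1.0 - a) * decay

# synthetic "hand cut" trace with known parameters plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5e-3, 500)
y = erf_exp(t, 1e-3, 3e-4, 6e-4, 0.5) + rng.normal(0.0, 0.005, t.size)

# fit, starting from a rough initial guess
popt, _ = curve_fit(erf_exp, t, y, p0=(1.2e-3, 2e-4, 5e-4, 0.4))
```

Comparing the fitted tau with and without the erf term would show whether the exponential tail is really needed to describe the data.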

We measured the cavity bandwidth and error signal for IR with three different velocities of the frequency scan. In this way, we can give a reasonable estimation of the IR calibration factor. The result is 180.7 +/- 5 Hz/V. We didn't consider the error of the individual measurements; the standard deviation comes only from these three measurement results.
velocity | bandwidth | calibration |
200Hz/s | 106Hz | 176Hz/V |
400Hz/s | 112Hz | 186Hz/V |
80Hz/s | 108Hz | 180Hz/V |
For a better parameter estimation, we will fit these measurement results.
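The quoted 180.7 +/- 5 Hz/V is just the mean and sample standard deviation of the three calibration values, which can be reproduced directly:

```python
import statistics

cal = [176, 186, 180]  # Hz/V, from the 200, 400 and 80 Hz/s scans

mean = statistics.fmean(cal)
std = statistics.stdev(cal)   # sample standard deviation over the 3 results

print(f"{mean:.1f} +/- {std:.1f} Hz/V")  # prints: 180.7 +/- 5.0 Hz/V
```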

As pointed out by Matteo B., the spectrum in entry #693 was not correct. After some investigation, we found that the problem comes from the conversion of the .DAT file.
After solving this problem, we compared the spectra of the green and infrared error signals, taken with different values of the loop UGF (10 kHz and 18 kHz). Now we are using a UGF of 18 kHz.
In the attached plot there are the calibrated spectra. The calibration factor we are using is 385 Hz/V for green and 180 Hz/V for infrared.
We also multiplied by a factor to make green and infrared superpose. We can see from the attached plot that green and infrared have the same trend at high frequency. This is in agreement with the fact that above 1.4 kHz we should also see the effect of the green cavity pole.
The attached .txt files are not calibrated.

Participants: Eleonora, Matteo L, Yuefan
We did a preliminary test of the dithering technique to keep the beam direction aligned with the cavity axis by acting on BS pitch and yaw.
We started with yaw:
1) We injected a sine perturbation into BS yaw with frequency 10 Hz and amplitude 3 mV.
2) We acquired the transmitted green power in LabVIEW using one of the spare channels of the "telescope" ADC board.
3) We demodulated it by multiplying it by a sine at the same frequency and filtered it with a low-pass (Butterworth 4th order, cutoff frequency 1 Hz).
4) We filtered the error signal with another low-pass (Butterworth 1st order with cutoff frequency 0.01 Hz and adjustable gain).
5) We summed the correction signal to the "manual offset" in the yaw local control loop, which is usually set by hand during the manual alignment procedure.
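The demodulation in step 3 can be sketched offline as below. This is a toy simulation, not the LabVIEW VI: the sample rate, the dither amplitude, and the quadratic power-vs-misalignment plant are all assumptions of mine.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0           # sample rate in Hz (assumed; the real ADC rate isn't stated)
f_dither = 10.0       # dither frequency from step 1
t = np.arange(0.0, 10.0, 1.0 / fs)

# Toy plant: transmitted power falls quadratically with the total misalignment,
# which is a static offset plus the injected dither (pure assumption).
offset = 0.3
dither = 0.05 * np.sin(2 * np.pi * f_dither * t)
power = 1.0 - (offset + dither) ** 2

# Step 3: multiply by a sine at the dither frequency, then low-pass
# (Butterworth 4th order, 1 Hz cutoff; zero-phase filtering used here).
sos = butter(4, 1.0, fs=fs, output="sos")
err = sosfiltfilt(sos, power * np.sin(2 * np.pi * f_dither * t))

# The DC level of err is proportional to -offset * dither amplitude,
# so its sign tells the loop which way to push the manual offset.
```

When the static offset goes to zero, the 10 Hz component of the power vanishes and the error signal goes to zero, which is exactly the behaviour seen in the video.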
In Pic. 1 there is an annotated scheme of the LabVIEW front panel; in Pic. 2 there is the block diagram of the VI.
The attached video shows the effect of the loop when we change the manual offset of BS yaw. The starting position is 0.02. We change it to 0.01 and then to 0.
(See Pic. 1 for a reference of the different controls and graphs shown.)
It seems that the loop is somehow able to bring the offset back to a position which makes the transmitted power less sensitive to the modulation. We need to check the long-term performance and implement the same loop on the other degree of freedom.
DEMODULATION PHASE ISSUE
We tried to adjust the demodulation phase by adding a tunable phase difference between the signal sent to the BS and the one used for the demodulation. With the loop open, we tried to change the demodulation phase in order to maximize the error signal, but we couldn't see any change. We suspect that there is a problem with the reset of the sub-VI used to generate the sine wave. We might have found a solution that we will try soon. Anyway, for the moment the demodulation phase is not optimized.
Link to the video in mp4 format.
https://drive.google.com/file/d/178Y6unT0S023VQCVl7pdYEdYgbTta22U/view?usp=sharing

Today we tried a method to measure the decay time of our filter cavity: cutting the incident beam mechanically. Using the data from the oscilloscope, I fitted with this function:
y = np.exp(-t/a1) + np.exp(-t/a2)
The reason I use this function is that we have two decay mechanisms: one is the mechanical cutting, the other is the cavity decay. The fitting result is a1 = 0.000326 and a2 = 0.0025. This means the cutting time is 0.000326 s and the cavity decay time is 0.0025 s. (See attached Fig 1)
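A double-exponential fit of this kind could look like the sketch below. It runs on synthetic data with the two time constants quoted above; the amplitudes, noise level, and initial guesses are assumptions of mine, not the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, A1, a1, A2, a2):
    # sum of two decays: the fast one ~ mechanical cutting,
    # the slow one ~ cavity field decay
    return A1 * np.exp(-t / a1) + A2 * np.exp(-t / a2)

# synthetic oscilloscope trace with the two time constants from the entry
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.01, 1000)
y = double_exp(t, 0.5, 3.26e-4, 0.5, 2.5e-3) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(double_exp, t, y, p0=(0.4, 2e-4, 0.6, 2e-3))
a_fast, a_slow = sorted((popt[1], popt[3]))
```

Note that the two exponentials are only separable if the time constants differ enough; with a1 and a2 an order of magnitude apart, as here, the fit is well conditioned.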

I've compared the error signals measured in entry #690 with the new ones. There is something strange: now the error signal for the IR is 10 times smaller than before.

PARTICIPANTS: Yuhang, Yuefan, Eleonora
In the past days we have monitored the cavity round-trip losses. We computed them from the cavity reflectivity with the technique described here.
In the current setup, the losses are measured using the IR reflected beam, sensed by a TAMA photodiode. The reflected beam is filtered with a bandpass filter in order to get rid of the residual green, and it is focused on the photodiode using a 2-inch lens with f = 30 mm. (See the first attachment for the setup scheme.)
With this setup we have found that the reflectivity (ratio between the reflected power in lock and out of lock) changes from day to day and takes values between 0.88 and 0.82. This corresponds to a variation of the RTL between 40 ppm and 75 ppm.
The change can be due to different alignment conditions (the beam impinges on different points of the mirrors, which scatter differently) and/or to some other factor affecting the measurement and not yet understood.
In the attached plots there are some measurements from the last days. Unfortunately, not all the measurements from which we deduced the RTL variation reported above have been recorded.
In order to increase the statistics, yesterday we repeated the measurement of the round-trip losses with the lock/unlock technique.
Since we did it at two different times of the day, the alignment conditions were likely different.
# | reflectivity | losses (ppm) |
#1 | 0.87±0.02 | 50±13 |
#2 | 0.80±0.03 | 81±16 |
The reflectivity has been computed by taking the mean of the time series over a locked and an unlocked period. The error is computed by propagating the standard deviations of these two sets of data.
We estimated that 7% of the input light does not couple into the cavity.
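The mean-and-propagation step described above can be sketched as below on synthetic stand-ins for the two time series (the values and noise levels are placeholders, not our data):

```python
import numpy as np

rng = np.random.default_rng(2)
locked = rng.normal(0.87, 0.01, 1000)    # reflected power while locked
unlocked = rng.normal(1.00, 0.01, 1000)  # reflected power while unlocked

# reflectivity = ratio of the two means
R = locked.mean() / unlocked.mean()

# propagate the standard deviations of the two segments to the ratio
sigma_R = R * np.hypot(locked.std() / locked.mean(),
                       unlocked.std() / unlocked.mean())
```

Using the full standard deviation of each segment (rather than the error on the mean) gives a conservative error bar, which seems to be the convention used in the table above.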
We did a new measurement of the RTL with lock/unlock.
Reflectivity 84% +/- 2% => Losses 63±12 ppm
We considered that 7% of the input light is not coupled into the cavity.
Loss measurement 28/03/18
Reflectivity: 89% +/- 2.5% => Losses: 44 +/- 12 ppm
Mismatch/misalignment considered in the estimation: 11% (worse than usual)